Applied Design of Experiments and Taguchi Methods







APPLIED DESIGN OF EXPERIMENTS AND TAGUCHI METHODS

K. KRISHNAIAH, Former Professor and Head, Department of Industrial Engineering, Anna University, Chennai
P. SHAHABUDEEN, Professor and Head, Department of Industrial Engineering, Anna University, Chennai

© 2012 by PHI Learning Private Limited, New Delhi. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publisher. ISBN-978-81-203-4527-0. The export rights of this book are vested solely with the publisher. Published by Asoke K. Ghosh, PHI Learning Private Limited, M-97, Connaught Circus, New Delhi-110001, and printed by Baba Barkha Nath Printers, Bahadurgarh, Haryana-124507.

To my wife Kasthuri Bai and students — K. Krishnaiah
To my teachers and students — P. Shahabudeen

CONTENTS

Preface

PART I: DESIGN OF EXPERIMENTS

1. REVIEW OF STATISTICS
   1.1 Introduction
   1.2 Normal Distribution
       1.2.1 Standard Normal Distribution
   1.3 Distribution of Sample Means
   1.4 The t-Distribution
   1.5 The F-Distribution
   1.6 Confidence Intervals
       1.6.1 Confidence Interval on a Single Mean
       1.6.2 Confidence Interval for Difference in Two Means
   1.7 Hypotheses Testing
       1.7.1 Tests on a Single Mean
       1.7.2 Tests on Two Means
       1.7.3 Dependent or Correlated Samples
   Problems

2. FUNDAMENTALS OF EXPERIMENTAL DESIGN
   2.1 Introduction
   2.2 Experimentation
       2.2.1 Conventional Test Strategies
       2.2.2 Better Test Strategies
       2.2.3 Efficient Test Strategies
   2.3 Need for Statistically Designed Experiments
   2.4 Analysis of Variance
   2.5 Basic Principles of Design
       2.5.1 Replication
       2.5.2 Randomization
       2.5.3 Blocking
   2.6 Terminology Used in Design of Experiments
   2.7 Steps in Experimentation
       2.7.1 Problem Statement
       2.7.2 Selection of Factors, Levels and Ranges
       2.7.3 Selection of Response Variable
       2.7.4 Choice of Experimental Design
       2.7.5 Conducting the Experiment
       2.7.6 Analysis of Data
       2.7.7 Conclusions and Recommendations
   2.8 Choice of Sample Size
       2.8.1 Variable Data
       2.8.2 Attribute Data
   2.9 Normal Probability Plot
       2.9.1 Normal Probability Plotting
       2.9.2 Normal Probability Plot on an Ordinary Graph Paper
       2.9.3 Half-normal Probability Plotting
   2.10 Brainstorming
   2.11 Cause and Effect Analysis
   2.12 Linear Regression
       2.12.1 Simple Linear Regression Model
       2.12.2 Multiple Linear Regression Model
   Problems

3. SINGLE-FACTOR EXPERIMENTS
   3.1 Introduction
   3.2 Completely Randomized Design
       3.2.1 The Statistical Model
       3.2.2 Typical Data for Single-factor Experiment
       3.2.3 Analysis of Variance
       3.2.4 Computation of Sum of Squares
       3.2.5 Effect of Coding the Observations
       3.2.6 Estimation of Model Parameters
       3.2.7 Model Validation
       3.2.8 Analysis of Treatment Means
       3.2.9 Multiple Comparisons of Means Using Contrasts
   3.3 Randomized Complete Block Design
       3.3.1 Statistical Analysis of the Model
       3.3.2 Estimating Missing Values in Randomized Block Design
   3.4 Balanced Incomplete Block Design (BIBD)
       3.4.1 Statistical Analysis of the Model
   3.5 Latin Square Design
       3.5.1 The Statistical Model
   3.6 Graeco-Latin Square Design
       3.6.1 The Statistical Model
   Problems

4. MULTI-FACTOR FACTORIAL EXPERIMENTS
   4.1 Introduction
   4.2 Two-factor Experiments
       4.2.1 The Statistical Model for a Two-factor Experiment
       4.2.2 Estimation of Model Parameters
       4.2.3 The Regression Model
   4.3 The Three-factor Factorial Experiment
       4.3.1 The Statistical Model for a Three-factor Experiment
   4.4 Randomized Block Factorial Experiments
   4.5 Experiments with Random Factors
       4.5.1 Random Effects Model
       4.5.2 Determining Expected Mean Squares
       4.5.3 The Approximate F-test
   4.6 Rules for Deriving Degrees of Freedom and Sum of Squares
       4.6.1 Rule for Degrees of Freedom
       4.6.2 Rule for Computing Sum of Squares
   Problems
5. THE 2k FACTORIAL EXPERIMENTS
   5.1 Introduction
   5.2 The 22 Factorial Design
       5.2.1 Determining the Factor Effects
       5.2.2 Development of Contrast Coefficients
       5.2.3 The Regression Model
   5.3 The 23 Factorial Design
       5.3.1 Development of Contrast Coefficient Table
       5.3.2 Yates Algorithm for the 2k Design
       5.3.3 The Regression Model
   5.4 Statistical Analysis of the Model
   5.5 The General 2k Design
   5.6 The Single Replicate of the 2k Design
   5.7 Addition of Center Points to the 2k Design
   Problems
6. BLOCKING AND CONFOUNDING IN 2k FACTORIAL DESIGNS
   6.1 Introduction
   6.2 Blocking in Replicated Designs
   6.3 Confounding
   6.4 The 2k Factorial Design in Two Blocks
       6.4.1 Assignment of Treatments to Blocks Using Plus–Minus Signs
       6.4.2 Assignment of Treatments Using Defining Contrast
   6.5 Complete Confounding
   6.6 Partial Confounding
   6.7 Confounding 2k Design in Four Blocks
   6.8 Confounding 2k Factorial Design in 2m Blocks
   6.9 Data Analysis from Confounding Designs
   Problems

7. TWO-LEVEL FRACTIONAL FACTORIAL DESIGNS
   7.1 Introduction
   7.2 The One-half Fraction of the 2k Design
   7.3 Design Resolution
   7.4 Construction of One-half Fraction with Highest Resolution
   7.5 The One-quarter Fraction of 2k Design
   7.6 The 2k–m Fractional Factorial Design
   7.7 Fractional Designs with Specified Number of Runs
   7.8 Fold-over Designs
   Problems

8. RESPONSE SURFACE METHODS
   8.1 Introduction
   8.2 Response Surface Designs
       8.2.1 Designs for Fitting First-order Model
       8.2.2 Central Composite Design (CCD)
       8.2.3 Box–Behnken Designs
   8.3 Analysis of Data from RSM Designs
       8.3.1 Analysis of First-order Design
       8.3.2 Analysis of Second-order Design
   Problems
PART II: TAGUCHI METHODS

9. QUALITY LOSS FUNCTION
   9.1 Introduction
   9.2 Taguchi Definition of Quality
   9.3 Taguchi Quality Loss Function
       9.3.1 Quality Loss Function (Nominal—the best case)
       9.3.2 Quality Loss Function (Smaller—the better case)
       9.3.3 Quality Loss Function (Larger—the better case)
   9.4 Estimation of Quality Loss
       9.4.1 Traditional Method
       9.4.2 Quality Loss Function Method
   Problems

10. TAGUCHI METHODS
    10.1 Introduction
    10.2 Taguchi Methods
        10.2.1 System Design
        10.2.2 Parameter Design
        10.2.3 Tolerance Design
    10.3 Robust Design
        10.3.1 Development of Orthogonal Designs
    10.4 Advantages of Robust Design
    Problems

11. ROBUST DESIGN
    11.1 Introduction
    11.2 Factors Affecting Response
    11.3 Objective Functions in Robust Design
    11.4 Basis of Taguchi Methods
    11.5 Simple Parameter Design
    11.6 Inner/Outer OA Parameter Design
    Problems
12. DESIGN OF EXPERIMENTS USING ORTHOGONAL ARRAYS
    12.1 Introduction
    12.2 Assignment of Factors and Interactions
        12.2.1 Linear Graph
    12.3 Selection and Application of Orthogonal Arrays
    12.5 Steps in Experimentation
    12.6 Confirmation Experiment
    12.7 Confidence Intervals
        12.7.1 Confidence Interval for a Treatment Mean
        12.7.2 Confidence Interval for Predicted Mean
        12.7.3 Confidence Interval for the Confirmation Experiment
    Problems

13. DATA ANALYSIS FROM TAGUCHI EXPERIMENTS
    13.1 Introduction
    13.2 Variable Data with Main Factors Only
    13.3 Variable Data with Interactions
    13.4 Variable Data with a Single Replicate and Vacant Column
    13.5 Attribute Data Analysis
        13.5.1 Treating Defectives as Variable Data
        13.5.2 Considering the Two-Class Data as 0 and 1
        13.5.3 Transformation of Percentage Data
    13.7 Relation between S/N Ratio and Quality Loss
    Problems

14. MULTI-LEVEL FACTOR DESIGNS
    14.1 Introduction
    14.2 Methods for Multi-level Factor Designs
        14.2.1 Merging Columns
        14.2.2 Dummy Treatment
        14.2.3 Combination Method
        14.2.4 Idle Column Method
    Problems
15. MULTI-RESPONSE OPTIMIZATION PROBLEMS
    15.1 Introduction
    15.2 Engineering Judgment
    15.3 Assignment of Weights
    15.4 Data Envelopment Analysis based Ranking Method
    15.5 Grey Relational Analysis
    15.6 Factor Analysis
    15.7 Genetic Algorithm
    Problems

16. CASE STUDIES
    16.1 Maximization of Food Color Extract from a Super Critical Fluid Extraction Process
    16.2 Automotive Disc Pad Manufacturing
    16.3 A Study on the Eye Strain of VDT Users
    16.4 Optimization of Flash Butt Welding Process
    16.5 Wave Soldering Process Optimization
    16.6 Application of L27 OA

Appendices
References
Index

PREFACE

Design of Experiments (DOE) is an off-line quality assurance technique used to achieve the best performance of products and processes. It consists of (i) the design of the experiment, (ii) the conduct of the experiment, and (iii) the analysis of the data. Designing an experiment suited to a particular problem situation is an important issue in DOE. Robust design is a methodology used to design products and processes such that their performance is insensitive to noise factors.

Though the subject of DOE is as old as Statistics, its application in industry is very limited, especially in developing countries including India. One of the reasons could be that this subject is not taught in many academic institutions. This subject has, however, been taught by the authors for the past fifteen years in the Department of Industrial Engineering, Anna University, Chennai. Dr. K. Krishnaiah has conducted several training programmes for researchers, scientists and industrial participants, and has also trained engineers and scientists in some organizations. This book is written using their experience and expertise.

The book addresses the traditional experimental designs (Part I) as well as Taguchi Methods (Part II), including robust design. It can be used as a textbook for undergraduate students of Industrial Engineering and postgraduate students of Mechanical Engineering, Manufacturing Systems Management, Systems Engineering and Operations Research, SQC & OR, and Statistics. We hope that this book will be easy to follow on the first reading itself; those with little or no statistical background can also find it easy to understand and apply to practical situations.

The case studies used in this book were conducted by our project/research students under our supervision. We also thank the students whose assignment problems and data have been used; we are thankful to them. We express our gratitude and appreciation to our colleagues in the Department of Industrial Engineering, Anna University, Chennai, for their encouragement and support. We also thank our faculty members Dr. Rajmohan and Dr. Padmanabhan, the research scholars Mr. Baskaran and Mr. Selvakumar, our P.G. student Mr. Sathish for providing computer output for some of the examples, and Mr. Theophilus for assisting in typing and correcting some of the chapters of the manuscript. Our sincere thanks are due to the editorial and production teams of PHI Learning for their meticulous processing of the manuscript. Any suggestions for improvement of this book are most welcome.

K. KRISHNAIAH
P. SHAHABUDEEN
PART I: DESIGN OF EXPERIMENTS

CHAPTER 1
Review of Statistics

1.1 INTRODUCTION

Population: A large group of data or a large number of measurements is called a population. This definition applies to both infinite and finite populations.

Sample: A subset of data taken from some large population or process is a sample.

Random sample: If each item in the population has an equal opportunity of being selected, the sample is called a random sample. A random sample of size n, if properly selected, will be independently and identically distributed.

Sample statistics: Suppose we have n individual data values (X1, X2, ..., Xn).

Sample mean: X̄ = (ΣXi)/n,  i = 1, 2, 3, ..., n        (1.1)
Sample variance: S² = Σ(Xi − X̄)²/(n − 1) = (ΣXi² − nX̄²)/(n − 1)        (1.2)
Sample standard deviation: S = √S²        (1.3)

Population parameters: The population mean and standard deviation are denoted by μ and σ respectively. The value of a population parameter is always constant; that is, for any population data set there is only one value of μ and one value of σ.

1.2 NORMAL DISTRIBUTION

The normal distribution is a continuous probability distribution. It describes continuous random variables such as the height of students, the weight of people, marks obtained by students and process output measurements. The normal curve is bell shaped.

Characteristics of the normal distribution:
1. The curve is symmetric about the mean.
2. The total area under the curve is 1.0, or 100%.
3. The tails on either side of the curve extend to infinity; the curve does not touch the axis. Since the area beyond μ ± 3σ is very small, for general use we treat it as zero.
4. The distribution is defined by two parameters, μ and σ. Knowing these two parameters, we can compute the area under the curve for any interval.

The probability density function of the normal distribution is

f(x) = [1/(σ√(2π))] exp{−(1/2)[(x − μ)/σ]²},   −∞ < x < ∞        (1.4)

where e = 2.71828 and π = 3.14159 approximately. f(x) gives the vertical distance between the horizontal axis and the curve at the point x. Figure 1.1 shows a typical normal curve.

FIGURE 1.1 Normal curve (bell-shaped curve centred at μ, extending from about μ − 3σ to μ + 3σ).

If a variable x is normally distributed with mean μ and variance σ², we usually write x ~ N(μ, σ²).
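The density in Eq. (1.4) is rarely evaluated by hand. As a quick numerical sketch (not part of the original text; it assumes Python with SciPy is available), the lines below evaluate f(x) and confirm characteristic 3, namely that practically all of the area lies within μ ± 3σ:

```python
# Hedged sketch: evaluating the normal pdf of Eq. (1.4) and the 3-sigma area.
from scipy.stats import norm

mu, sigma = 3.0, 0.009            # example parameters; any values work
print(norm.pdf(3.01, loc=mu, scale=sigma))   # f(x): height of the curve at x = 3.01

# Area within mu +/- 3*sigma -- about 0.9973, so the tails beyond it are negligible
area = norm.cdf(mu + 3*sigma, mu, sigma) - norm.cdf(mu - 3*sigma, mu, sigma)
print(round(area, 4))
```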
1.2.1 Standard Normal Distribution

The standard normal distribution is a special case of the normal distribution with mean 0 and variance 1. If a variable x follows N(μ, σ²), then the random variable

z = (x − μ)/σ

follows the standard normal distribution, denoted by z ~ N(0, 1) (Figure 1.2). This distribution facilitates easy calculation of the area between any two points under the curve.

FIGURE 1.2 Standard normal distribution curve (σ = 1, with the z axis marked from −3 to +3 about the mean 0).

The horizontal axis of this curve is represented by z, and the centre point (the mean) is labelled 0. The z value of a point x on the horizontal axis gives the distance between the mean and that point in terms of the standard deviation. For example, a point with z = 1.00 lies 1 standard deviation to the right of the mean, and z = −2 indicates a point 2 standard deviations to the left of the mean. The z values on the right side of the mean are positive and on the left side are negative.

For computing the area under the curve between z = 0 and any point x, we compute

z_x = (x − μ)/σ        (1.5)

Corresponding to this z value, we obtain the area from Table A.1; the standard normal distribution table is given in Appendix A. This table gives the areas under the standard normal curve between z = 0 and values of z from 0.00 to 3.09. Since the total area under the curve is 1.00 and the curve is symmetric about the mean, the area on each side of the mean is 0.5.

ILLUSTRATION 1.1
The diameter of manufactured shafts is normally distributed with a mean of 3.0 cm and a standard deviation of 0.009 cm. Shafts with a diameter of 2.98 cm or less are scrapped, and shafts with a diameter of more than 3.02 cm are reworked. Determine the percentage of shafts scrapped and the percentage reworked.

SOLUTION:
Mean (μ) = 3.0 cm, standard deviation (σ) = 0.009 cm
Upper limit for rework (U) = 3.02 cm; lower limit at which shafts are scrapped (L) = 2.98 cm

The Z values corresponding to U and L are

Z_U = (U − μ)/σ = (3.02 − 3.00)/0.009 = 2.22
Z_L = (L − μ)/σ = (2.98 − 3.00)/0.009 = −2.22

From the standard normal table,

P(Z_U > 2.22) = 0.5 − 0.4868 = 0.0132, so the percentage of rework is 1.32%.
P(Z_L < −2.22) = 0.5 − 0.4868 = 0.0132, so the percentage of scrap is 1.32%.

Figure 1.3 shows the probability calculation for this illustration.

FIGURE 1.3 Finding the tail area beyond a Z value (areas of 0.0132 beyond z = ±2.22).
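The same tail probabilities can be checked directly from the normal distribution, without the table. The short sketch below (not from the original text; it assumes SciPy) reproduces the scrap and rework fractions of Illustration 1.1:

```python
# Hedged sketch: verifying Illustration 1.1 with scipy.stats.norm.
from scipy.stats import norm

mu, sigma = 3.0, 0.009
scrap  = norm.cdf(2.98, mu, sigma)        # P(X <= 2.98)
rework = 1 - norm.cdf(3.02, mu, sigma)    # P(X > 3.02)
print(f"scrap  = {scrap:.4f}")            # about 0.013
print(f"rework = {rework:.4f}")           # about 0.013
```

Both probabilities come out near 0.013, agreeing with the 1.32% obtained above to within the rounding of the z value.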
1.3 DISTRIBUTION OF SAMPLE MEANS

The distribution of a sample statistic is called its sampling distribution. Suppose we draw m samples of size n from a population. The value of each sample mean (X̄) will be different, and the sample mean X̄ is therefore a random variable. The distribution of these sample means is termed the sampling distribution of X̄.

Mean of the sampling distribution: The mean of the sampling distribution is the mean of all the sample means and is denoted by μ_X̄. The mean of the population (μ) is estimated by μ_X̄.

Standard deviation of the sampling distribution: The standard deviation of the sampling distribution is denoted by σ_X̄ and is also called the standard error of the mean. That is,

σ_X̄ = σ/√n        (1.6)

Equation (1.6) is applicable when n/N ≤ 0.05. Otherwise, we have to use a correction factor; that is,

σ_X̄ = (σ/√n) √[(N − n)/(N − 1)]        (1.7)

Shape of the sampling distribution: The shape of the sampling distribution depends on whether the samples are drawn from a normal or a non-normal population. If the samples are drawn from a normal population N(μ, σ²), the sampling distribution is also normal. If the samples are drawn from a non-normal population and the sample size is large (n ≥ 30), the shape of the sampling distribution is approximately normal (by the central limit theorem). So, in general, we can make use of the characteristics of the normal distribution for studying the distribution of sample means. Note that N(μ, σ²) indicates a normal population with mean μ and variance σ².

1.4 THE t-DISTRIBUTION

The t-distribution is also known as Student's t-distribution. Suppose X1, X2, ..., Xn is a random sample from an N(μ, σ²) distribution, and X̄ and S² are computed from this sample. Then the random variable (X̄ − μ)/(S/√n) has a t-distribution with n − 1 degrees of freedom. The t-distribution is similar to the normal distribution in some respects: it is symmetric about the mean, but somewhat flatter than the normal curve, and its standard deviation is always greater than one. It has only one parameter, the degrees of freedom, which for this statistic is the sample size minus one. The shape of the t-distribution curve depends on the number of degrees of freedom; as the sample size increases, the t-distribution approaches the normal distribution. A table of percentage points of the t-distribution is given in Appendix A. Its application is discussed in Section 1.7.

1.5 THE F-DISTRIBUTION

The F-distribution is defined by two numbers of degrees of freedom: the numerator degrees of freedom (n1) and the denominator degrees of freedom (n2). The F-statistic is named after Sir Ronald Fisher. The distribution is skewed to the right, and the skewness decreases as the degrees of freedom increase. The value in the F-table gives the right-tail area for a given pair of n1 and n2 degrees of freedom. A table of percentage points of the F-distribution is given in Appendix A. The F-statistic is used to test hypotheses in ANOVA.
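The percentage points referred to above are read from the appendix tables, but they can also be computed. The sketch below (not part of the original text; it assumes SciPy) shows how typical critical values of the t- and F-distributions are obtained:

```python
# Hedged sketch: percentage points of the t and F distributions.
from scipy.stats import t, f

print(t.ppf(0.975, df=9))          # t value leaving 2.5% in the upper tail, 9 df (about 2.262)
print(f.ppf(0.95, dfn=4, dfd=12))  # F value leaving 5% in the upper tail, (4, 12) df (about 3.26)
```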
1.6 CONFIDENCE INTERVALS

We often estimate the value of a parameter, say the mean height of a college's male students, from a random sample of size n by computing the sample mean. Such an estimate is called a point estimate. The accuracy of this estimate largely depends on the sample size, and it always differs from the true value of the population mean. Instead, we use an interval estimate constructed around the point estimate, and we make a probability statement that this interval contains the corresponding population parameter. These interval statements are called confidence intervals. The extent of confidence we have that the interval contains the true population parameter is called the confidence level, denoted by (1 − α)100%, where α is the significance level and (1 − α) is the confidence coefficient.

1.6.1 Confidence Interval on a Single Mean (μ)

Case 1: Large samples (n > 30). The (1 − α)100% confidence interval for μ is

X̄ ± Z_{α/2} σ_X̄,  if σ is known        (1.8)
X̄ ± Z_{α/2} S_X̄,  if σ is not known        (1.9)

where Z is the standard normal deviate corresponding to the given confidence level.

Case 2: Small samples (n < 30). The (1 − α)100% confidence interval for μ is

X̄ ± t_{α/2, n−1} S_X̄        (1.10)

where the value of t is obtained from the t-distribution with n − 1 degrees of freedom for the given confidence level.

1.6.2 Confidence Interval for Difference in Two Means

Case 1: Large samples. The (1 − α)100% confidence interval for μ1 − μ2 is

(X̄1 − X̄2) ± Z_{α/2} √(σ1²/n1 + σ2²/n2),  if σ1 and σ2 are known        (1.11)
(X̄1 − X̄2) ± Z_{α/2} √(S1²/n1 + S2²/n2),  if σ1 and σ2 are not known        (1.12)

Case 2: Small samples. The (1 − α)100% confidence interval for μ1 − μ2 is

(X̄1 − X̄2) ± t_{α/2} S_p √(1/n1 + 1/n2)        (1.13)

where

S_p = √{[(n1 − 1)S1² + (n2 − 1)S2²] / (n1 + n2 − 2)}

The t-value is obtained from the t-distribution for the given confidence level and n1 + n2 − 2 degrees of freedom.
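As an illustration of Eq. (1.10), the small sketch below (not from the original text; the numbers are hypothetical and SciPy is assumed) computes a 95% confidence interval for a mean from summary statistics:

```python
# Hedged sketch: small-sample confidence interval of Eq. (1.10).
import math
from scipy.stats import t

xbar, s, n, conf = 50.0, 4.0, 16, 0.95        # hypothetical summary statistics
half_width = t.ppf(1 - (1 - conf) / 2, df=n - 1) * s / math.sqrt(n)
print(f"{conf:.0%} CI: ({xbar - half_width:.2f}, {xbar + half_width:.2f})")
```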
1.7 HYPOTHESES TESTING

A statistical hypothesis is an assumption about the population being sampled. There are two types of hypothesis:
1. Null hypothesis (H0)
2. Alternative hypothesis (H1)

Null hypothesis: A null hypothesis is a claim or statement about a population parameter that is assumed to be true. For example, a company manufacturing electric bulbs claims that the average life of its bulbs (μ) is 1000 hours. If this is true, the average life is μ = 1000 hr, and the null hypothesis is written as H0: μ = 1000 hr.

Alternative hypothesis: An alternative hypothesis is a claim or statement about a population parameter that is true if the null hypothesis is false. For example, the alternative hypothesis for the life of bulbs is that the average life is less than 1000 hr, written as H1: μ < 1000 hr.

A test of hypothesis is simply a rule by which a hypothesis is either accepted or rejected. Such a rule is usually based on a sample statistic called the test statistic. Since the decision is based on a statistic computed from n observations, it is subject to two types of errors.

Type I error: The null hypothesis is true but is rejected. The probability of a Type I error is denoted by α:
α = P(H0 is rejected | H0 is true)
The value of α represents the significance level of the test.

Type II error: The null hypothesis is accepted when it is not true, that is, when some alternative hypothesis is true. The probability of a Type II error is denoted by β:
β = P(H0 is accepted | H0 is false)
(1 − β) is called the power of the test; it denotes the probability of not committing a Type II error.

Tails of a test: Depending on the type of alternative hypothesis, we have either a one-tail test or a two-tail test. Suppose the null and alternative hypotheses are

H0: μ = μ0
H1: μ ≠ μ0

In this case we have a two-tail test (Figure 1.4): the rejection region lies on both tails and the value of α is equally divided between them. If the alternative hypothesis is H1: μ > μ0, it is a right-tail test; if H1: μ < μ0, it is a left-tail test. In a one-tail test the rejection region exists only on one side, depending on the type of alternative hypothesis.

FIGURE 1.4 A two-tailed test: the acceptance region lies between the critical values C1 and C2, with a rejection region of area α/2 in each tail.

p-value approach: The p-value is defined as the smallest value of the significance level (α) at which the stated null hypothesis is rejected. Suppose we compute the value of Z for a test as Z = (X̄ − μ)/σ_X̄; this is called the observed value of Z. We then find the area under the tail of the normal distribution curve beyond this value of Z. This area gives the p-value or one-half of the p-value, depending on whether it is a one-tail or a two-tail test. For a one-tail test, the p-value is the area in the tail of the sampling distribution curve beyond the observed value of the sample statistic (Figure 1.5 shows the p-value for a left-tail test). For a two-tail test, the p-value is twice the area in the tail beyond the observed sample statistic (Figure 1.6). If we have a predetermined value of α, we compare the p-value with α and arrive at a decision:

Reject H0 if p-value < α (that is, α > p-value)
Do not reject H0 if p-value ≥ α (that is, α ≤ p-value)

FIGURE 1.5 p-value for a left-tailed test (area to the left of the observed value).
FIGURE 1.6 p-value for a two-tailed test (sum of the two tail areas beyond the observed value and its mirror image).

1.7.1 Tests on a Single Mean

The null and alternative hypotheses for this test are

H0: μ = μ0
H1: μ ≠ μ0 (or μ > μ0 or μ < μ0, depending on the type of problem)

Case 1: When σ is known. The test statistic is

Z0 = (X̄ − μ0)/σ_X̄        (1.14)

where X̄ is the sample mean, n is the sample size and σ_X̄ = σ/√n. Reject H0 if |Z0| > Z_{α/2} (or Z0 > Z_α or Z0 < −Z_α, depending on the type of H1). This test is applicable for a normal population with known variance, or if the population is non-normal but the sample size is large (n > 30), in which case σ_X̄ is replaced by S_X̄ in Eq. (1.14).

ILLUSTRATION 1.2
The tensile strength of a fabric is required to be at least 50 kg/cm². From past experience it is known that the standard deviation of tensile strength is 2.5 kg/cm². From a random sample of 9 specimens, the mean tensile strength is found to be 48 kg/cm².
(i) State the appropriate hypotheses for this experiment and test them using α = 0.05. What is your conclusion?
(ii) What is your decision based on the p-value?

SOLUTION:
(i) The hypotheses to be tested are
H0: μ ≥ 50 kg/cm²
H1: μ < 50 kg/cm²
Since the standard deviation is known, the test statistic is
Z0 = (X̄ − μ0)/σ_X̄ = (48 − 50)/(2.5/√9) = −2/0.83 = −2.41
From the standard normal table, the critical value is −Z_{0.05} = −1.65. Since Z0 < −Z_{0.05}, we reject the null hypothesis and conclude that the tensile strength is less than 50 kg/cm².
(ii) p-value approach: From the standard normal table, the tail area under the curve beyond −2.41 is 0.008. Hence the p-value for the test is 0.008. Since the α value (0.05) is more than the p-value, we reject the null hypothesis.
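The calculation in Illustration 1.2 can be reproduced in a few lines. The sketch below (not part of the original text; it assumes SciPy) computes the Z statistic of Eq. (1.14) and the left-tail p-value:

```python
# Hedged sketch: one-sample Z test of Illustration 1.2.
import math
from scipy.stats import norm

xbar, mu0, sigma, n = 48, 50, 2.5, 9
z0 = (xbar - mu0) / (sigma / math.sqrt(n))
p  = norm.cdf(z0)                      # left-tail p-value for H1: mu < 50
print(round(z0, 2), round(p, 4))       # about -2.4 and 0.008
```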
ILLUSTRATION 1.3
A study conducted a year back claims that high school students spend on an average 11 hours per week on the Internet. A recent random sample of 100 students found that they spend on average 9 hours per week on the Internet, with a standard deviation of 2.2 hours.
(i) Test the hypothesis that the current students spend less than 11 hours per week on the Internet. Use α = 0.05.
(ii) What is the p-value for the test?
(iii) Determine the 95% confidence interval for the mean time.

SOLUTION:
(i) Note that the sample size is large (n > 30), so the Z statistic is applicable.
H0: μ = 11 hr
H1: μ < 11 hr
Z0 = (X̄ − μ0)/S_X̄ = (9 − 11)/(2.2/√100) = −2/0.22 = −9.09
From the standard normal table, the critical value is −Z_{0.05} = −1.65. Since Z0 < −Z_{0.05}, we reject the null hypothesis. The conclusion is that the current students on average spend less time than found in the earlier study.
(ii) From the standard normal tables, the area under the curve beyond Z = −9.09 can be taken as zero, so the p-value ≈ 0. Since α = 0.05 > p-value, we reject the null hypothesis.
(iii) The confidence interval is
X̄ ± Z_{α/2} S_X̄ = 9 ± 1.96(2.2/√100) = 9 ± 0.43
or 8.57 ≤ μ ≤ 9.43.

Case 2: When σ is unknown (normal population, n < 30). The test statistic is

t0 = (X̄ − μ0)/S_X̄        (1.15)

where S_X̄ = S/√n. Reject H0 if |t0| > t_{α/2, n−1} (or t0 > t_{α, n−1} or t0 < −t_{α, n−1}, depending on the type of H1).

ILLUSTRATION 1.4
A cement manufacturer claims that the mean settling time of his cement is not more than 45 minutes. A random sample of 20 bags of cement selected and tested showed an average settling time of 49.5 minutes with a standard deviation of 3 minutes. Test whether the company's claim is true. Use α = 0.05.

SOLUTION:
Here the sample size is small (n < 30), so we use the t-statistic.
H0: μ ≤ 45 minutes
H1: μ > 45 minutes
t0 = (X̄ − μ0)/S_X̄ = (49.5 − 45)/(3/√20) = 4.5/0.67 = 6.7
From the t-table, t_{0.05, 19} = 1.729. Since t0 > t_{α, n−1}, we reject the null hypothesis. The inference is that the settling time is greater than 45 minutes.
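Illustration 1.4 can be checked the same way from its summary statistics. The sketch below (not from the original text; SciPy assumed) computes the t statistic of Eq. (1.15) and the right-tail p-value:

```python
# Hedged sketch: one-sample t test of Illustration 1.4.
import math
from scipy.stats import t

xbar, mu0, s, n = 49.5, 45, 3, 20
t0 = (xbar - mu0) / (s / math.sqrt(n))
p  = 1 - t.cdf(t0, df=n - 1)           # right-tail p-value for H1: mu > 45
print(round(t0, 2), round(p, 6))       # about 6.71 and a p-value near zero
```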
ILLUSTRATION 1.5
A gym claims that its weight loss exercise causes an average weight reduction of at least 10 kg. From a random sample of 36 individuals it was found that the average weight loss was 9.5 kg with a standard deviation of 2.2 kg.
(i) Test the claim of the gym. Use α = 0.05.
(ii) Find the p-value for the test.

SOLUTION:
(i) The null and alternative hypotheses are
H0: μ ≥ 10 kg
H1: μ < 10 kg
Z0 = (X̄ − μ0)/S_X̄ = (9.5 − 10)/(2.2/√36) = −1.35
From the standard normal table, the critical value is −Z_{0.05} = −1.65. Since Z0 is not less than −Z_{0.05}, we do not reject the null hypothesis. That is, the claim made by the gym is true.
(ii) Corresponding to the Z value of −1.35, from the standard normal tables, the p-value = 0.0885. Since the α value is less than the p-value, we do not reject the null hypothesis.

1.7.2 Tests on Two Means

Here the two samples are assumed to be independent. When we have two population means μ1 and μ2, we test a hypothesis about μ1 − μ2:

H0: μ1 = μ2

The alternative hypothesis may be one of the following:
1. The two population means are different: μ1 ≠ μ2, which is the same as μ1 − μ2 ≠ 0.
2. The mean of the first population is more than the second population mean: μ1 > μ2, which is equivalent to μ1 − μ2 > 0.
3. The mean of the first population is less than the second population mean: μ1 < μ2, which is equivalent to μ1 − μ2 < 0.

Case 1: When the variances σ1² and σ2² are known.
H0: μ1 = μ2
H1: μ1 ≠ μ2 or μ1 > μ2 or μ1 < μ2
The test statistic is

Z0 = [(X̄1 − X̄2) − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)        (1.16)

The value of μ1 − μ2 in Eq. (1.16) is substituted from H0. Reject H0 if |Z0| > Z_{α/2} (or Z0 > Z_α or Z0 < −Z_α, depending on the type of H1). If the variances are not known and the sample sizes are large (n > 30), σ1 and σ2 in Eq. (1.16) are replaced by S1 and S2.

ILLUSTRATION 1.6
A company manufacturing clay bricks claims that its bricks (Brand A) dry faster than the rival company's Brand B. A customer tested both brands by selecting samples randomly, and the results in Table 1.1 were obtained. Test whether the company's claim is true at the 5% significance level. Also construct the 95% confidence interval for the difference in the two means.

TABLE 1.1 Illustration 1.6
Brand   Sample size   Mean drying time (hr)   Standard deviation of drying time (hr)
A       25            44                      11
B       22            49                       9

SOLUTION:
H0: μ1 = μ2
H1: μ1 < μ2
The test statistic (with μ1 − μ2 = 0 from H0) is
Z0 = (X̄1 − X̄2)/√(S1²/n1 + S2²/n2) = (44 − 49)/√(11²/25 + 9²/22) = −5/2.92 = −1.71
Reject H0 if Z0 < −Z_α. From the normal table, −Z_{0.05} = −1.65. Since Z0 < −1.65, we reject the null hypothesis. That is, the mean drying time of the two brands is different.

p-value approach: The p-value for the test is 0.0436. Since the α value (0.05) is more than the p-value, we reject the null hypothesis.

Confidence interval: The (1 − α)100% confidence interval for μ1 − μ2 is
(X̄1 − X̄2) ± Z_{α/2} √(S1²/n1 + S2²/n2) = −5 ± 1.96(2.92) = −5 ± 5.72
or −10.72 ≤ μ1 − μ2 ≤ 0.72.
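Illustration 1.6 can likewise be verified from the summary data in Table 1.1. The sketch below (not part of the original text; SciPy assumed) computes the Z statistic of Eq. (1.16), its left-tail p-value and the 95% interval:

```python
# Hedged sketch: two-sample Z test and confidence interval of Illustration 1.6.
import math
from scipy.stats import norm

x1, s1, n1 = 44, 11, 25    # Brand A
x2, s2, n2 = 49, 9, 22     # Brand B
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
z0 = (x1 - x2) / se
print(round(z0, 2), round(norm.cdf(z0), 4))                            # about -1.71 and 0.0436
print(round(x1 - x2 - 1.96 * se, 2), round(x1 - x2 + 1.96 * se, 2))    # 95% CI limits
```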
Case 2: When the variances are unknown but equal (normal populations, σ1² = σ2²).
H0: μ1 = μ2
H1: μ1 ≠ μ2
The test statistic is

t0 = [(X̄1 − X̄2) − (μ1 − μ2)] / √{ [((n1 − 1)S1² + (n2 − 1)S2²)/(n1 + n2 − 2)] (1/n1 + 1/n2) }        (1.17)

The value of μ1 − μ2 is substituted from H0. Reject H0 if |t0| > t_{α/2, ν}, where ν = n1 + n2 − 2. This test is applicable when the sample sizes are less than 30.

ILLUSTRATION 1.7
In the construction industry, a study was undertaken to find out whether male workers are paid more than female workers. From a sample of 25 male workers, it was found that their average wage was ₹115.70 with a standard deviation of ₹13.20, whereas the average wage of female workers was ₹106.00 with a standard deviation of ₹10.40, from a sample of 20. Using a 5% significance level, test whether the wages of male workers are the same as those of female workers. Assume that the wages follow normal distributions with equal but unknown population standard deviations.

SOLUTION:
Here it is assumed that the standard deviations are unknown but equal.
H0: μ1 = μ2
H1: μ1 ≠ μ2
t0 = (115.7 − 106.0) / √{ [(24 × 13.2² + 19 × 10.4²)/(25 + 20 − 2)] (1/25 + 1/20) } = 9.7/3.63 = 2.672
Reject H0 if |t0| > t_{α/2, ν}, where ν = n1 + n2 − 2 = 43. From the t-table, t_{0.025, 43} = 2.017. Since |t0| > t_{α/2, ν}, we reject H0. That is, the average wages paid to male and female workers are significantly different.

The p-value approach: To find the p-value we first find the significance level corresponding to the tail area at t0 = 2.672 with 43 degrees of freedom. From the t-table we may not always be able to find a tail area matching the computed value t0; in such a case we take the nearest value. For this problem the nearest tabulated value to t0 = 2.672 at 43 degrees of freedom is 2.695, and the corresponding significance level is 0.005. Since it is a two-tail test, the approximate p-value is 0.005 × 2 = 0.01. Since the α value is more than the p-value, we reject the null hypothesis.

Case 3: When the variances are unknown and unequal (normal populations, σ1² ≠ σ2²).
H0: μ1 = μ2
H1: μ1 > μ2 or μ1 < μ2
The test statistic is

t0 = [(X̄1 − X̄2) − (μ1 − μ2)] / √(S1²/n1 + S2²/n2)        (1.18)

with ν degrees of freedom, where

ν = (S1²/n1 + S2²/n2)² / [ (S1²/n1)²/(n1 − 1) + (S2²/n2)²/(n2 − 1) ]        (1.19)

Reject H0 if |t0| > t_{α/2, ν}, or t0 > t_{α, ν}, or t0 < −t_{α, ν}. This test is applicable when the sample sizes are less than 30.

ILLUSTRATION 1.8
A study on the pattern of spending in shopping was conducted to find out whether the spending is the same between male and female adult shoppers. The data obtained are given in Table 1.2. Assume that the two populations are normally distributed with unknown and unequal variances. Test whether the difference in the two means is significant at the 5% level.

TABLE 1.2 Illustration 1.8
Population   Sample size   Average amount spent (₹)   Standard deviation (₹)
Males        25            80                          17.5
Females      20            96                          14.4

SOLUTION:
H0: μ1 − μ2 = 0
H1: μ1 − μ2 ≠ 0
It is a two-tail test; the area in each tail is α/2 = 0.025. The test statistic is
t0 = (X̄1 − X̄2)/√(S1²/n1 + S2²/n2) = (80 − 96)/√(17.5²/25 + 14.4²/20) = −16/4.76 = −3.36
ν = (12.25 + 10.37)² / (150.06/24 + 107.5/19) = 511.66/11.91 ≈ 43
Reject H0 if |t0| > t_{α/2, ν}. From the t-table, t_{0.025, 43} = 2.017. Since |t0| > t_{α/2, ν}, we reject H0. That is, the difference between the two means is significant, which means that female shoppers spend more than male shoppers. The approximate p-value for this test is 2 × 0.001 = 0.002.
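For unequal variances, the statistic and its degrees of freedom in Eqs. (1.18) and (1.19) are easy to automate. The sketch below (not part of the original text; SciPy assumed) reproduces Illustration 1.8:

```python
# Hedged sketch: unequal-variance t test of Illustration 1.8.
import math
from scipy.stats import t

x1, s1, n1 = 80, 17.5, 25     # male shoppers
x2, s2, n2 = 96, 14.4, 20     # female shoppers
v1, v2 = s1**2 / n1, s2**2 / n2
t0 = (x1 - x2) / math.sqrt(v1 + v2)                        # Eq. (1.18)
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))  # Eq. (1.19)
p  = 2 * t.cdf(-abs(t0), df)                               # two-tail p-value
print(round(t0, 2), round(df), round(p, 4))                # about -3.36, 43 df, 0.002
```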
1.7.3 Dependent or Correlated Samples

Here we have the same sample measured before and after some treatment (test) has been applied. Suppose we want to test the effect of a training programme on a group of participants using some criterion: we evaluate the group before the training programme and again after it using the same criterion, and then statistically test the effect. Since the same sample is used before and after the treatment, such samples are called dependent or correlated samples, and the test employed in such cases is called the paired t-test. The procedure is to take the differences between the first and second observations on the same sample (person or part), giving n pairs of data, and to test the mean difference (μd). The hypotheses are

H0: μd = 0
H1: μd ≠ 0

The test statistic is

t0 = d̄ / (S_d/√n),  with n − 1 degrees of freedom        (1.20)

where d̄ is the average of the n paired differences and S_d is the standard deviation of these differences. Reject H0 if |t0| > t_{α/2, n−1}.

ILLUSTRATION 1.9
Two types of assembly fixtures have been developed for assembling a product. Ten assembly workers were selected randomly and asked to use both fixtures to assemble the product. The assembly time taken by each worker on the two fixtures for one product is given in Table 1.3. Test at the 5% level of significance whether the mean times taken to assemble a product are different for the two types of fixtures.

TABLE 1.3 Illustration 1.9
Worker      1   2   3   4   5   6   7   8   9  10
Fixture 1  23  26  19  24  27  22  20  18  21  25
Fixture 2  21  24  23  25  24  28  24  23  19  22

SOLUTION:
Here each worker assembles the same product using both fixtures. Hence the samples are dependent, and we have to use the paired t-test.
H0: μd = 0
H1: μd ≠ 0
The test statistic is t0 = d̄/(S_d/√n) with n − 1 degrees of freedom; reject H0 if |t0| > t_{α/2, n−1}. The differences and their squares are:

Product          1   2   3   4   5   6   7   8   9  10
Fixture 1       23  26  19  24  27  22  20  18  21  25
Fixture 2       21  24  23  25  24  28  24  23  19  22
Difference (d)   2   2  −4  −1   3  −6  −4  −5   2   3
d²               4   4  16   1   9  36  16  25   4   9

The values of d̄ and S_d are computed as follows:
Σd = −8, Σd² = 124, n = 10
d̄ = Σd/n = −8/10 = −0.80
S_d = √{[Σd² − (Σd)²/n] / (n − 1)} = √[(124 − 6.4)/9] = 3.61
t0 = d̄/(S_d/√n) = −0.80/(3.61/√10) = −0.80/1.142 = −0.70
From the table, t_{α/2, n−1} = t_{0.025, 9} = 2.262. Since |t0| < t_{α/2, n−1}, we do not reject the null hypothesis. That is, there is no significant difference between the assembly times obtained with the two fixtures; hence either fixture can be used.
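Because the raw data of Table 1.3 are available, the paired test can also be run directly. The sketch below (not part of the original text; SciPy assumed) gives the same conclusion as the hand calculation:

```python
# Hedged sketch: paired t test of Illustration 1.9 on the raw assembly times.
from scipy.stats import ttest_rel

fixture1 = [23, 26, 19, 24, 27, 22, 20, 18, 21, 25]
fixture2 = [21, 24, 23, 25, 24, 28, 24, 23, 19, 22]
t0, p = ttest_rel(fixture1, fixture2)
print(round(t0, 2), round(p, 3))    # about -0.70 and 0.50, so H0 is not rejected
```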
The test scores obtained before and after the training are given below: Before training: After training: 69 56 67 70 55 62 43 72 77 71 46 50 75 68 52 72 43 68 65 68 Test whether the training programme has any effect on the participants.Review of Statistics 21 Assume that the shrinkage follows normal distribution and has unequal variances. 800. 750. Use a = 0. 800 Brand Y: 950. 900. 850. 790. 900.7 The life of two brands of electric bulbs of same wattage measured in hours is given below: Brand X: 800. (ii) Find 90% confidence interval on the mean difference in shrinkage. The table below gives the weight of 7 male adults selected randomly from a group undergoing the exercise. insurance. 2.1 Conventional Test Strategies One-factor experiments: The most common test plan is to evaluate the effect of one parameter on product performance. The system may be a simple/complex product or process. etc. Experimentation can be used for developing new products/processes as well as for improving the quality of existing products/processes. banking. biology or physical sciences. a test is run at two different conditions of that 22 . A product can be developed in engineering. It covers a wide range of applications from agriculture. The traditional approach in the industrial and the scientific investigation is to employ trial and error methods to verify and validate the theories that may be advanced to explain some observed phenomenon. biological. A process can be a manufacturing process or service process such as health care. These approaches are explained in the following sections. 2. In any experimentation the investigator tries to find out the effect of input variables on the output /performance of the product/process.C H A P T E R 2 Fundamentals of Experimental Design 2. social science. manufacturing and service sectors. This enables the investigator to determine the optimum settings for the input variables. and make connections between the conclusions from the analysis and the original objectives of the investigation.2. In this approach. Some of the approaches also include one factor at a time experimentation. the methods can be applied to all other disciplines also. Although the major emphasis in this book is on engineering products/processes. analyse the data.1 INTRODUCTION Experimentation is one of the most common activities performed by everyone including scientists and engineers. This may lead to prolonged experimentation and without good results. etc.2 EXPERIMENTATION Experimental design is a body of knowledge and techniques that assist the experimenter to conduct experiment economically. Experimentation is used to understand and/or improve a system. several factors one at a time and several factors all at the same time. Fundamentals of Experimental Design 23 parameter (Table 2. it is attributed to factor A. If there is any change in the result (between Y1 and Y2).1 Trial 1 2 One factor at a time experiment Test result * * * * Test average Y1 Y2 Factor level A1 A2 Several factors one at a time: Suppose we have four factors (A. TABLE 2.2). Suppose we have several factors including factor A. In this strategy we keep all other factors constant. This is the traditional scientific approach to experimentation. each factor level is changed one at a time.1). If there TABLE 2. TABLE 2. In this strategy. The first trial is conducted with factor A at first level and the second trial is with level two. This is considered to be base level experiment. In the second trial. C. 
parameter, keeping all the other factors constant (Table 2.1). Suppose we have several factors including factor A. In this strategy we keep all other factors constant and determine the effect of one factor (say A). The first trial is conducted with factor A at its first level and the second trial with its second level. If there is any change in the result (between Y1 and Y2), it is attributed to factor A. This is the traditional scientific approach to experimentation.

TABLE 2.1  One factor at a time experiment

Trial    Factor level    Test result    Test average
  1          A1              * *            Y1
  2          A2              * *            Y2

Several factors one at a time: Suppose we have four factors (A, B, C, D). In this strategy, each factor level is changed one at a time, keeping all the other factors constant (Table 2.2). The first trial is conducted with all factors at their first level; this is considered to be the base level experiment. In the second trial, the level of one factor (factor A) is changed to its second level. Its result (Y2) is compared with the base level experiment (Y1). If there is any change in the result (difference in the average between Y1 and Y2), we attribute that to factor A. In the third trial, the level of factor B is changed and the test average Y3 is compared with Y1 to determine the effect of B. If the first factor chosen fails to produce the expected result, some other factors would be tested. Thus, the combined effect due to any two or more factors cannot be determined.

TABLE 2.2  Several factors one at a time strategy

              Factors
Trial    A    B    C    D    Test result    Test average
  1      1    1    1    1        * *             Y1
  2      2    1    1    1        * *             Y2
  3      1    2    1    1        * *             Y3
  4      1    1    2    1        * *             Y4
  5      1    1    1    2        * *             Y5

Several factors all at the same time: The third and most urgent situation finds the experimenter changing several things all at the same time with a hope that at least one of these changes will improve the situation sufficiently (Table 2.3). In this strategy, the first trial is conducted with all factors at their first level and the second trial with all factors at their second level. If there is any change in the test average between Y1 and Y2, it is not possible to attribute the change to any one of the factors.

TABLE 2.3  Several factors all at the same time strategy

              Factors
Trial    A    B    C    D    Test result    Test average
  1      1    1    1    1       * * *            Y1
  2      2    2    2    2       * * *            Y2
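To make the structure of these plans concrete, the short Python sketch below (the function name is ours, chosen only for illustration) generates the "several factors one at a time" plan of Table 2.2 for any number of two-level factors.

def one_at_a_time_plan(n_factors):
    """Trials of the 'several factors one at a time' strategy (Table 2.2).

    Trial 1 is the base run with every factor at level 1; each later trial
    raises exactly one factor to its second level.
    """
    base = [1] * n_factors
    plan = [base[:]]
    for i in range(n_factors):
        trial = base[:]
        trial[i] = 2
        plan.append(trial)
    return plan

for k, trial in enumerate(one_at_a_time_plan(4), start=1):
    print(f"Trial {k}: levels {trial}")
# Five trials for four factors: each of Y2..Y5 can be compared with the base
# result Y1, but nothing in the plan reveals how two factors act together.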
Using this information.2 Better Test Strategies A full factorial design with two factors A and B each with two levels will appear as given in Table 2. the number of experiments to be run under full factorial design is very large. Hence. For instance. It is a statistical method used to interpret experimented data and make decisions about the parameters under study. ANOVA is a method of partitioning total variation into accountable sources of variation in an experiment. summation can be made for degrees of freedom. in the case of multi-factor designs with random effects or mixed effects model. F-test F-test is used to test whether the effects due to the factors are significantly different or not. ANOVA table A summary of all analysis of variance computations are given in the ANOVA table. From a limited number of experiments the experimenter would be able to uncover the vital factors on which further trials would lead him to track down the most desirable combination of factors which will yield the expected results. . the British biologist.5. mean is estimated from all data and requires one degree of freedom (df) for that purpose. The basic equation of ANOVA is given by Total sum of squares = sum of squares due to factors + sum of squares due to error SSTotal = SSFactors + SSError Degrees of freedom A degree of freedom in a statistical sense is associated with each piece of information that is estimated from the data.Fundamentals of Experimental Design 25 possible variables/factors that are suspected to have a bearing on the problem under consideration and as such even if interaction effects exist. A typical format used for one factor is given in Table 2. The statistical principles employed in the design and analysis of experimental results assure impartial evaluation of the effects on an objective basis. However. The statistical concepts used in the design form the basis for statistically validating the results from the experiments. It is also referred to as Mean Square (MS). Similar to the total sum of squares. 2. a valid evaluation of the main factors can be made. Total df = df associated with factors + error df Variance The variance (V) of any factor/component is given by its sum of squares divided by its degrees of freedom. Another way to think the concept of degree of freedom is to allow 1df for each fair (independent) comparison that can be made in the data. F-value is computed by dividing the factor variance/mean square with error variance/mean square.4 ANALYSIS OF VARIANCE The statistical foundations for design of experiments and the Analysis of Variance (ANOVA) was first introduced by Sir Ronald A. Fisher. the denominator for computing F-value shall be determined by computing expected mean squares. 2. The allocation of experimental units (samples) for conducting the experiment as well as the order of experimentation .5 BASIC PRINCIPLES OF DESIGN The purpose of statistically designed experiments is to collect appropriate data which shall be analysed by statistical methods resulting in valid and objective conclusions. Replications also permit the experimenter to obtain a precise estimate of the effect of a factor studied in the experiment. This error estimate (error variance) is used for testing statistically the observed difference in the experimental data.26 Applied Design of Experiments and Taguchi Methods TABLE 2. animal. and 2. machine. etc. person. it is to be noted that replication is not a repeated measurement.5. 2. 
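A small sketch of the balance property discussed above. It enumerates the 2 x 2 full factorial of Table 2.4 and checks the orthogonality of the two columns under the usual -1/+1 coding (the coding itself is an assumption made only for this check).

from itertools import product

# Full factorial for two factors A and B, each at levels 1 and 2 (Table 2.4).
runs = list(product([1, 2], repeat=2))
print(runs)                      # [(1, 1), (1, 2), (2, 1), (2, 2)]

# Balance: each level of A is tested the same number of times, and under
# each level of A both levels of B appear equally often.
for a_level in (1, 2):
    print(f"A = {a_level}: B levels tested ->", [b for a, b in runs if a == a_level])

# Orthogonality: with -1/+1 coding the two columns have zero dot product,
# so the estimate of the A effect does not depend on B and vice versa.
code = {1: -1, 2: +1}
col_a = [code[a] for a, _ in runs]
col_b = [code[b] for _, b in runs]
print("dot product of coded columns:", sum(x * y for x, y in zip(col_a, col_b)))

# Seven two-level factors would need 2**7 = 128 runs in a full factorial;
# an L8 orthogonal array keeps the same kind of balance with only 8 runs.
print("runs in a 2^7 full factorial:", 2 ** 7)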
These five measurements are five replicates.2 Randomization The use of statistical methods requires randomization in any experiment. The statistical analysis of the data. An experimental unit may be a material. Finally.5. The purpose of replication is to obtain the magnitude of experimental error. The following basic principles are used in planning/designing an experiment.5 Source of variation Factor Error Total where N SSF K SSe F0 VF Ve = = = = = = = Analysis of variance computations (ANOVA) Degrees of freedom K– 1 N – K N– 1 Mean square/ variance VF = SSF/ K – 1 Ve = SSe/N – K F0 V F/ V e Sum of squares SSF SSe SSTotal total number of observations sum of squares of factor number of levels of the factor sum of squares of error computed value of F variance of factor variance of error 2. The two aspects of experimental problem are as follows: 1.1 Replication Replication involves the repetition of the experiment and obtaining the response from the same experimental set up once again on different experimental unit (samples). The design of the experiment. Suppose in an experiment five hardness measurements are obtained on five samples of a particular material using the same tip for making the indent. Levels of a factor: The values of a factor/independent variable being examined in an experiment. replication and blocking are part of all experiments. etc. If the factor is an attribute. Experimental error: It is the variation in response when the same experiment is repeated. these three values are the three levels of the factor temperature.3 Blocking Blocking is a design technique used to improve the precision of the experiment. All input variables which affect the output of a system are factors.) or quantitative (temperature. They can be controlled at fixed levels. an experiment conducted using temperature T1 and pressure P1 would constitute one treatment. . Effect: Effect of a factor is the change in response due to change in the level of the factor.). the range is divided into required number of levels. For example. 2. animal. number of defectives. 2. plant. For example. 1100°C and 1200°C. each level of the factor is a treatment. In the case of single factor experiment. Factors are varied in the experiment. the factor temperature ranges from 1000 to 1200°C and it is to be studied at three values say 1000°C. appropriate statistical design methods shall be used to tackle restriction on randomization. Experimental unit: Facility with which an experimental trial is conducted such as samples of material. A block is a portion of the experimental material that should be more homogeneous than the entire set of material or a block is a set of more or less homogeneous experimental conditions. randomization. It is estimated as the residual variation after the effects have been removed. More about blocking will be discussed in factorial designs. setting of a switch on or off are the two levels of the factor switch setting. They can be qualitative (type of material. tensile strength. pressure. each of its state is a level. These are also called independent variables. surface finish. When complete randomization is not possible. For example. etc. Response: The result/output obtained from a trial of an experiment. etc. The three basic principles of experimental design.6 TERMINOLOGY USED IN DESIGN OF EXPERIMENTS Factor: A variable or attribute which influences or is suspected of influencing the characteristic being investigated. Examples are yield.5. 
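The ANOVA computations of Table 2.5 can be expressed directly. The sketch below is a generic helper (our own function, applied to made-up data) that partitions the total sum of squares and forms the F statistic; scipy's one-way ANOVA is used as a cross-check.

import numpy as np
from scipy import stats

def one_way_anova(groups):
    """Sum-of-squares partition and F test of Table 2.5 for one factor."""
    y = np.concatenate(groups)
    N, K = y.size, len(groups)
    cf = y.sum() ** 2 / N                         # correction factor
    ss_total = (y ** 2).sum() - cf
    ss_factor = sum(g.sum() ** 2 / g.size for g in groups) - cf
    ss_error = ss_total - ss_factor
    ms_factor = ss_factor / (K - 1)               # factor variance (mean square)
    ms_error = ss_error / (N - K)                 # error variance
    f0 = ms_factor / ms_error
    p = stats.f.sf(f0, K - 1, N - K)
    return {"SS_factor": ss_factor, "SS_error": ss_error, "SS_total": ss_total,
            "MS_factor": ms_factor, "MS_error": ms_error, "F0": f0, "p": p}

# Hypothetical responses observed at three levels of a single factor.
groups = [np.array([12.0, 14.0, 13.0]),
          np.array([17.0, 18.0, 16.0]),
          np.array([11.0, 10.0, 12.0])]

print(one_way_anova(groups))
print(stats.f_oneway(*groups))    # same F and p-value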
Statistical methods require that the observations (or errors) are independently distributed random variables. This is also called dependent variable. type of tool. The levels can be fixed or random. caused by conditions not controlled in the experiment. It also assists in averaging out the effects of extraneous factors that may be present during experimentation.Fundamentals of Experimental Design 27 should be random. etc. They can be varied or set at levels of our interest. It meets this requirement. person. Treatment: One set of levels of all factors employed in a given experimental trial. It is also a restriction on complete randomization. If the factor is a variable. Blocking is used to reduce or eliminate the effect of nuisance factors or noise factors. The insight into the problem leads to a better design of the experiment. blister. 7. We have to specify clearly whether the problem is less force or more force or both. problem selection should be given importance. whenever there is problem of inferior process or product performance. The following seven steps procedure may be followed: 1. levels and ranges Selection of response variable Choice of experimental design Conducting the experiment Analysis of data Conclusions and recommendations 2. The following methods may be used: (i) Brainstorming (ii) Flowcharting (especially for processes) (iii) Cause-effect diagrams . design of experiments would be handy. 2. for example to determine the optimum operating conditions for a process to evaluate the relative effects on product performance of different sources of variability or to determine whether a new process is superior to the existing one. etc. The experiment will be planned properly to collect data in order to apply the statistical methods to obtain valid and objective conclusions.7.7. all the traditional quality tools will be tried. Identification of these factors leads to the selection of experimental design. the customer expectations should also be kept in mind. A clear definition of the problem often contributes to better understanding of the phenomenon being studied and the final solution of the problem. A few examples of specifying a problem are given below: (i) If there is a rejection in bore diameter. These factors include the design factors which can be controlled during experimentation. The objective of the experiment may be. The various questions to be answered by the experimenter and the expected objectives may also be formulated. to improve the performance further.28 Applied Design of Experiments and Taguchi Methods 2. Since experimentation takes longer time and may disturb the regular production. 4..1 Problem Statement Initially. we should also specify whether the problem is observed as random phenomenon on the product or concentrated to one specific area. 5. Still if the problem persists.2 Selection of Factors. (iii) If the problem is a defect like crack. Problem statement Selection of factors. Problem should be clearly defined along with its history if any. (ii) Suppose there is a variation in the compression force of a shock absorber. we have to be very specific. we need to specify whether the rejection is due to oversize or undersize or both. 6. Here.7 STEPS IN EXPERIMENTATION The experimenter must clearly define the purpose and scope of the experiment. When we state the problem. 3. Levels and Ranges There may be several factors which may influence the performance of a process or product. 2. . Otherwise the results will be misleading. 
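Randomizing the order of experimentation, as the randomization principle above requires, is traditionally done with a table of random numbers; a few lines of code do the same job. The sketch below (the treatment labels are arbitrary) produces a completely randomized test order for twelve planned runs.

import random

# Twelve planned runs: three treatments with four replications each,
# numbered 1-12 in the order they were planned.
planned = list(enumerate(["A1"] * 4 + ["A2"] * 4 + ["A3"] * 4, start=1))

run_order = planned[:]
random.shuffle(run_order)         # complete randomization of the test order

print("Test sequence  Run no.  Treatment")
for seq, (run_no, treatment) in enumerate(run_order, start=1):
    print(f"{seq:13d}  {run_no:7d}  {treatment}")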
Over the operating range of the variable which of the values to be considered as the levels has to be carefully judged. the number of levels for each factor is usually decided. etc. two are recommended to minimize the size of the beginning experiment. For visual and sensory characteristics. Choice of design involves the consideration of the number of factors and their levels.4 Choice of Experimental Design Several experimental designs are available like factorial designs (single factor and multifactor). The choice of levels for quantitative factors is often critical. The capability of the measuring devices must also be ensured. In preliminary experimentation. the dependent variables to be evaluated. time to complete one replication. A clear statement is required of the response to be studied. In order to estimate factor effect. 2. etc.7. resources required such as sample size (number of replications). randomization restrictions applicability of blocking. this requires the process knowledge and past experience. Usually. the experience and the specialized knowledge of engineers and scientists dominate the selection of factors. In doing so one may miss the range that might give best response. % elongation. 2. Frequently there may be a number of such response variables. surface finish. i. In selecting the design . more number of levels is considered. Selecting the right type of design for the problem under study is very important. And it is preferable to include the current operating value as one of the levels in experimentation.Fundamentals of Experimental Design 29 Often. Selection of the extreme (within the operating range) is always safe because the system would function safely. However. It is important that standard procedure for measuring such variables should be established and documented. confounding designs. In detailed experimentation with a few factors. The range over which each factor to be studied and the values for each level are fixed taking into account the working range of the process. etc. fractional factorial designs.e. one may not see any difference and may miss something important outside the range of experimentation. if levels are chosen too close together. Initial experiments will eliminate many factors from contention and the few remaining can then be investigated with multiple levels. % defectives. it is particularly important that standards be developed to remove subjectivity in judgments made by different observers of different times. Initial rounds of experimentation should involve many factors at few levels. Sometimes ‘on’ and ‘off’ or ‘yes’ and ‘no’ can also be the two qualitative levels of a factor. leakages. Depending on the availability of resources and time for experimentation. tensile strength. Only thing is that the level must be practical and should give valid data. a minimum of two levels should be used. the number of levels is limited to two.3 Selection of Response Variable The response variable selected must be appropriate to the problem under study and provide useful information. Selection of levels for qualitative variables is not difficult.7. for example. where large number of factors is included in the study. availability of experimental units (samples). So it is always better to include experts and operators who are knowledgeable in the process to suggest levels for study. where one factor may be very difficult or expensive to change the test set up for. 
the experimenter must interpret the results from the practical point of view and recommend the same for possible implementation. but once that trial is selected all the repetitions are tested for that trial. Complete randomization within blocks is used. The different methods of randomization affect error variance in different ways. complete randomization is recommended. If factor A were difficult to change. the experiment could be completed in two blocks. it is important to monitor the process to ensure that everything is being carried out as per the plan. a trial can be selected.7 Conclusions and Recommendations After the data are analysed using the statistical methods. In industrial experiments.7. Statistical analysis of data is a must for academic and scientific purpose. 2. This method is used when test set up change is easy and inexpensive. but others are very easy. Only the analysis is concerned with the interaction columns. This method is used if test set ups are very difficult or expensive to change.7. This may cause certain factors in ANOVA to be significant when in fact they are not.7. Randomization may be complete. It is always better to conduct conformation tests using the recommend levels for the factors to validate the conclusions. Confidence interval estimation is also part of the data analysis. Residual analysis and model adequacy checking are also part of the data analysis procedure. the graphical analysis and normal probability plot of the effects may be preferred. statistical methods should be used to analyse the data so that results and conclusions are objective. 2. Analysis of variance is widely used to test the statistical significance of the effects through F-Test. The allocation of experimental units as well as the order of experimentation of the experimental trials should be random. In this book some important designs widely applied are discussed. Using random numbers. Simple repetition means that any trial has an equal chance of being selected for the first test. All A1 trials could be selected randomly and then all A2 trials could be randomly selected. Complete randomization means any trial has an equal chance of being selected for the first test. However. This will even out the effect of unknown and controlled factors that may vary during the experimentation period. 2. Often an empirical model is developed relating the dependent (response) and independent variables (factors). trial to trial variation is large and repetition variation is less. When conducting the experiment.5 Conducting the Experiment It is to be noted that the test sheets should show only the main factor levels required for each trial.30 Applied Design of Experiments and Taguchi Methods it is important to keep the objectives in mind. simple repetition or complete within blocks.6 Analysis of Data Several methods are available for analysing the data collected from experiment. The recommendations include the settings or levels for all the factors (input variables studied) that optimizes the output (response). . Hence. In simple repetition. 8 CHOICE OF SAMPLE SIZE 2. An advantage of half-normal plot is that all the large estimated effects appear in the right corner and fall away from the line. 2. in a study on defective parts. it is inferred that the residuals/errors follow normal distribution.8. …. Often the number of replicates is chosen arbitrarily based on historical choices or guidelines.9. Also normal probability plot of effects is used to judge the significance of effects. 
The sample size should be as large as possible so that it meets the statistical principles and also facilitate a better estimate of the effect. The absolute value of normal variable is half-normal. The sample size (the number of replications) depends on the resources available (the experimental units) and the time required for conduct of the experiment. The mean is estimated as the .…. 2. Suppose the past per cent defective in a process is 5%.2 Attribute Data Attribute data provides less discrimination than variable data. Because of this reduced discrimination. This method is often used by practitioners in the industry. Alternatively we can use half-normal probability plot of residuals and effects. For example. If all the residuals fall along a straight line drawn through the plotted points.9 NORMAL PROBABILITY PLOT One of the assumptions in all the statistical models used in design of experiments is that the experimental errors (residuals) are normally distributed. the measure of how bad is not indicated. we should obtain at least a total of 20 defectives.Fundamentals of Experimental Design 31 2. A general guideline is that the class (occurrence or non-occurrence) with the least frequency should have at least a count of 20.1 Variable Data The selection of sample size (number of repetitions) is important. And the ordered data (Xj) is plotted against their observed cumulative frequency (j – 0. More than one test per trial increases the sensitivity of experiment to detect small changes in averages of populations. Then the total sample size for the whole study (all trials/ runs) shall be 400 to obtain an expected number of 20 defectives. If all the data points fall approximately along a straight line. when an item is classified as bad (defective). That is. X2.8. The effects which fall away from the straight line are judged as significant effects. it is concluded that the data (variable) follows normal distribution.1 Normal Probability Plotting The sample data is arranged in ascending order of the value of the variable (X1. This normality assumption can be verified by plotting the residuals on a normal probability paper. 2. Xj. A minimum of one test result for each trial is required.5)/n on the normal probability paper. many pieces of attribute data are required. Xn). Sometimes normal probability plots misleads in identifying the real significant effects which is avoided in halfnormal probability plots. 200.64 .67 1.67 –0.1 above. Normal probability plot of data The following data represent the hardness of 10 samples of certain alloy steel measured after a heat treatment process. 261.04 –0. 276.9.45 0. the effects which fall away from the straight line are concluded as significant effects. 258. 223. 235.75 0.95 Zj –1. TABLE 2. The data arranged in the ascending order of value is shown in the second column of Table 2.04 1. If the factor effects are plotted on the normal probability paper.85 0. the data corresponding to those residuals which fall away from the straight line are called outliers and such data shall not be considered.65 0. ….5)/10 0. 2. 3.05 0. The first column indicates the rank of the ordered data.6. 237 Table 2. If residuals are plotted on the normal probability paper.13 0.13 0.35 0.55 0. Xn ). Third column gives the cumulative frequency of the ordered data.32 Applied Design of Experiments and Taguchi Methods 50th percentile value on the probability plot.1) j = 1.15 0.64 –1.39 –0.6 shows the computations required for normal probability plot. 
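The attribute-data guideline above reduces to a one-line calculation. The sketch below (the helper name is ours) converts the expected proportion of the least frequent class into the total sample size the guideline asks for.

import math

def attribute_sample_size(p_least_frequent, min_count=20):
    """Total observations needed so the rarer class is expected >= min_count times."""
    return math.ceil(min_count / p_least_frequent)

# The text's example: with a historical 5% defective rate, about 400 parts
# are needed to expect 20 defectives over the whole study.
print(attribute_sample_size(0.05))    # 400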
And the standard deviation is given by the difference between 84th percentile and the 50th percentile value. X2.6 Computations for normal probability plot j 1 2 3 4 5 6 7 8 9 10 Xj 195 223 228 235 237 248 256 263 272 290 (j – 0. Compute the cumulative frequency j  0.5 n Transform the cumulative frequency into a standardized normal score Zj  j  0. 228.5  Zj = G  n   (2. n (2.2) Plot Zj vs Xj The interpretation of this plot is same as explained in Section 2.25 0. 2.9.39 0. How to obtain a normal probability plot on a normal probability paper is illustrated below: 275. The last column (Zj) is the standardized normal score. ….2 Normal Probability Plot on an Ordinary Graph Paper Arrange the sample data in ascending order of the value of the variable (X1. 265. This is a plot of the absolute values of the effects against their cumulative normal probability.7 0.1625 0.2375 d = –1.3 Half-normal Probability Plotting The half-normal probability plot is generally used for plotting the factor effects. From Figure 2.1).3375 abcd = 0.2875 b = 0.7875 ab ac ad bd cd = = = = = 0.5 0. The cumulative normal probability is plotted on a normal probability paper against the effects. This is left as an exercise to the reader.01 150 200 250 Hardness 300 350 FIGURE 2.4375 0. 2.5375 The computations required for normal probability plot of the above effects are given in Table 2.4125 acd = 0.5625 – 0.2 shows the normal probability plot of effects. The effects corresponding to the points which fall away from the straight line are considered to be significant.1 Normal probability plot of data.3625 bc = – 0.95 0.7.05 Probability 0.5875 – 0.2125 bcd = – 0.5)/10 (Y-axis) on a normal probability paper is the normal probability plot of the data (Figure 2.3 0.8 0.1 0.4 0. 0.6 0. a = – 0. This plot is particularly .Fundamentals of Experimental Design 33 The plot Xj (X-axis) versus (j – 0. We can also plot Xj (X-axis) versus Zj (Y-axis) on an ordinary graph sheet to obtain the same result.5875 abd = – 0.2 we observe that two points fall away from the straight line drawn through the points.99 0.7625 c = 0.2 0.9.6125 abc = 0. Figure 2. Normal probability plot of effects Suppose we have the following factor effects from an experiment.9 0. 34 Applied Design of Experiments and Taguchi Methods TABLE 2.9666 95 90 Per cent (%) 80 70 60 50 40 30 20 10 5 1 –2 –1 0 Effect 1 2 FIGURE 2.5625 abc = 0.4375 abcd = 0.7000 0.5375 ad = 0.7625 Normal probability (j – 0.1625 acd = 0.4125 bcd = –0.3000 0.6125 bd = –0.3666 0.7875 cd = –0.0333 0.5875 abd = –0.9000 0.7 j 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 99 Computations for normal probability plot of effects Effects (X j ) d = –1.2125 c = 0.7666 0.2875 ac = 0.5)/15 0.3625 bc = –0.8333 0.2375 ab = 0.5666 0.6333 0.2 Normal plot of the effects.3375 a = –0.4333 0.1666 0. .5000 0.5875 b = 0.1000 0.2333 0. 2. This probability is plotted on a half-normal probability paper.5000 0. 2. 3. The required computations are given in Table 2.7).8.5   n    j = 1.2375 0.3000 0. We consider the absolute effects and arrange them in the increasing order and the observed cumulative frequency (j – 0.5375 0.1000 0.5 + 0. This probability can be converted into a standardized normal score Zj and plot on an ordinary graph paper.9666 . Here we consider the absolute effects and arrange them in increasing order and the normal probability is computed. From this half-normal plot it is seen that TABLE 2. n (2.3625 Normal probability (j – 0. 
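The plotting positions described in this section can be generated directly, which is convenient when probability paper is not at hand. The sketch below reproduces the coordinates of Table 2.6 for the hardness data, recovers the rough mean and standard deviation estimates mentioned above from a straight-line fit, and also computes the half-normal positions 0.5 + 0.5(j - 0.5)/n used for plotting absolute effects (the effect values at the end are hypothetical).

import numpy as np
from scipy import stats

# Hardness of 10 alloy-steel samples (the ordered values of Table 2.6).
x = np.sort(np.array([195, 223, 228, 235, 237, 248, 256, 263, 272, 290], float))
n = x.size
j = np.arange(1, n + 1)

cum_freq = (j - 0.5) / n               # normal plotting positions
z = stats.norm.ppf(cum_freq)           # standardized normal scores Z_j
for xi, p, zi in zip(x, cum_freq, z):
    print(f"{xi:6.0f}  {p:4.2f}  {zi:+5.2f}")

# Plotting z against the ordered data on ordinary graph paper gives the
# normal probability plot; a straight-line fit recovers the estimates
# mean ~ 50th percentile and sd ~ (84th minus 50th) percentile.
sd_est, mean_est = np.polyfit(z, x, 1)
print(f"estimated mean = {mean_est:.1f}, estimated sd = {sd_est:.1f}")

# Half-normal positions for m ordered absolute effects (Section 2.9.3):
# P_j = 0.5 + 0.5*(j - 0.5)/m, plotted against |effect|.
def half_normal_positions(effects):
    abs_eff = np.sort(np.abs(np.asarray(effects, float)))
    m = abs_eff.size
    p = 0.5 + 0.5 * (np.arange(1, m + 1) - 0.5) / m
    return abs_eff, p, stats.norm.ppf(p)

# Hypothetical effect estimates; only the largest should fall off the line.
print(half_normal_positions([0.1, -0.3, 0.2, -0.15, 1.9, 0.05, -0.25]))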
The effects which fall away from the straight line are considered to be significant effects.4333 0.8333 0. ….2875 0.5875 0. The advantage of half-normal plot is that all the large estimated effects appear in the right corner and fall away from the line. n (2.7666 0. 3.5   n   j = 1.5   Z j = G 0.1666 0.5875 0.5 + 0.5  Probability = 0.5625 0.8 j 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Computations for half-normal probability plot Effects | Xj | 0.3) on a normal probability paper.1625 0. Half-normal probability plot of effects Consider the same effects used for normal plot (Table 2.6125 0.7000 0.Fundamentals of Experimental Design 35 useful when there are only few effects such as an eight run design.4125 0.4375 0.4) This is left as an exercise to the reader. Figure 2.9000 0.2333 0.7875 1.7625 0. Alternately the ordered effects (Table 2.0333 0.3 shows the half-normal plot of the effects.2125 0.3375 0.5666 0.3666 0.5)/16 0. …. where   j  0.7) are plotted against their observed cumulative  j  0.6333 0.5)/ n is plotted on a half-normal probability paper. Thus. 2. 2. It can help to analyse cause and effect relationships and used in the conjunction with brainstorming. An atmosphere is created such that everybody feels free to express themselves.36 Applied Design of Experiments and Taguchi Methods 98 95 Per cent (%) 90 85 80 70 60 50 40 30 20 10 Absolute effect FIGURE 2.11 CAUSE AND EFFECT ANALYSIS Cause and effect analysis is a technique for identifying the most probable causes (potential cause) affecting a problem. The tool used is called cause and effect diagram.10 BRAINSTORMING Brainstorming is an activity which promotes group participation and team work. there is a marked difference between normal and half-normal plot. . Each input and contribution is recognized as important and output of the whole session is in the context. The continuing involvement of each participant is assured and the groups future is reinforced by mapping out the exact following actions (analysis and evaluation of ideas) and the future progress of the project. All ideas from each participant are recorded and made visible to all the participants. encourages creative thinking and stimulates the generation of as many ideas as possible in a short period of the time. only one point (effect) lies far away from the straight line which is considered to be significant. The participants in a brainstorming session are those who have the required domain knowledge and experience in the area associated with the problem under study. It is recommended to use half-normal plot to identify significant effects.3 Half-normal plot of the effects. The effect is placed in a box on the right and a long process line is drawn pointing to the box (Figure 2. etc. The effect may be process rejections or inferior performance of a product.4 Cause and effect diagram. etc.5) . machine (equipment).12 LINEAR REGRESSION When experiments are conducted involving only quantitative factors like temperature.4). If the study involves only one dependent variable (response). The experimenter. around which other associated causes are clustered. Usually. Each one of these major causes can be viewed as an effect in its own right with its own process line. If the relation between Y and X is linear. If the response is related linearly with more than one independent variable. 2. concentration of a chemical. These minor causes are identified through brainstorming. 
through discussion and a process of elimination, arrives at the most likely causes (factors) to be investigated in experimentation.

A cause and effect diagram visually depicts a clear picture of the possible causes of a particular effect. The effect may be process rejections or inferior performance of a product. The effect is placed in a box on the right and a long process line is drawn pointing to the box (Figure 2.4). Usually, the major categories of causes recorded are those due to person (operator), machine (equipment), method (procedure) and material. After deciding the major categories of causes, these are recorded on either side of the line within boxes connected with the main process line through other lines. Each one of these major causes can be viewed as an effect in its own right, with its own process line, around which other associated causes are clustered; these minor causes are identified through brainstorming. The cause and effect diagram is also known as the Fish Bone diagram or Ishikawa diagram (named after the Japanese professor Ishikawa).

FIGURE 2.4  Cause and effect diagram.

2.12 LINEAR REGRESSION

When experiments are conducted involving only quantitative factors like temperature, pressure, concentration of a chemical, etc., the experimenter often would like to relate the output (response) with the input variables (factors) in order to predict the output or optimize the process. If the study involves only one dependent variable (response) and one independent variable (factor), and the relation between them is linear, it is called simple linear regression. If the response is related linearly with more than one independent variable, it is called multiple linear regression.

2.12.1 Simple Linear Regression Model

Suppose the yield (Y) of a chemical process depends on the temperature (X). If the relation between Y and X is linear, the model that describes this relationship is

    Y = b0 + b1X + e                                                    (2.5)
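As a numerical companion to Eq. (2.5), the sketch below fits the simple linear model by least squares using the formulas developed in the following pages (b1 = SSXY/SSX and b0 = Ybar - b1*Xbar). The temperature and yield data are hypothetical, invented only to have something to fit; numpy's own least-squares fit is used as a check.

import numpy as np

# Hypothetical temperature (X, deg C) and process yield (Y) observations
# for the model Y = b0 + b1*X + e of Eq. (2.5).
X = np.array([100.0, 110.0, 120.0, 130.0, 140.0, 150.0])
Y = np.array([62.0, 66.0, 71.0, 73.0, 78.0, 83.0])

n = X.size
ss_x = np.sum(X**2) - n * X.mean()**2                # corrected SS of X
ss_xy = np.sum(X * Y) - n * X.mean() * Y.mean()      # corrected cross product

b1 = ss_xy / ss_x                                     # slope estimate
b0 = Y.mean() - b1 * X.mean()                         # intercept estimate
print(f"fitted model: Y = {b0:.2f} + {b1:.4f} X")

slope, intercept = np.polyfit(X, Y, 1)                # cross-check
print(f"np.polyfit:   Y = {intercept:.2f} + {slope:.4f} X")

print("predicted yield at 125 deg C:", round(b0 + b1 * 125.0, 2))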
The appropriate hypotheses are H0 : b1 = 0 H 1: b 1 ¹ 0 Reject H0 if F0 exceeds Fa.Fundamentals of Experimental Design 39 Similarly. dfe are error degrees of freedom. TABLE 2.9.22) SSTotal =  Yi2  (2.21) (2.9 ANOVA for simple linear regression Source Regression Error Total Sum of squares SSR = b1 SSXY SSe = SSTotal – b1SSXY SSTotal Degree of freedom 1 n – 2 n – 1 Mean square MSR = SSR /1 MSe = SSe /n – 2 F MSR/MSe Test for significance of regression is to determine if there is a linear relationship between X and Y. we can also test the coefficients b0 and b1 using one sample t test with n – 2 degrees of freedom. The hypotheses are and H 0: b 0 = 0 H 1: b 0 ¹ 0 H 0: b 1 = 0 H 1: b 1 ¹ 0 . Hence. the profit depends on investment on R&D. Investment in R&D (` in millions): 2 3 5 4 11 5 Annual profit (` in lakhs): 20 25 34 30 40 31 (i) Develop a simple linear regression model to the data and estimate the profit when the investment is 7 million rupees. (iii) Test significance of b1 .n–2 C0 MSe (1/n) + ( X 2 /SSX ) (2.1 XY 40 75 170 120 440 155 1000 X2 4 9 25 16 121 25 200 Y2 400 625 1156 900 1600 961 5642 X= 30  Xi = = 5.n–2 ILLUSTRATION 2.25) t0 = Reject H0.10 X 2 3 5 4 11 5 30 Y 20 25 34 30 40 31 180 Computations for Illustration 2. TABLE 2.0 n 6 Y= . (ii) Test the significance of regression using F-test. if |t0 | > t a/2.1 C1 MSe /SSX (2. They have collected the following data from their company records. SOLUTION: In this problem. if |t0 | > t a/2.0 n 6 180 Y i = = 30.26) Simple Linear Regression A software company wants to find out whether their profit is related to the investment made in their research and development.40 Applied Design of Experiments and Taguchi Methods The test statistics are t0 = Reject H0.10. X = Investment in R&D and Y = Profit The summation values of various terms needed for regression are given in Table 2. 11 Source Regression Error Total ANOVA for Illustration 2. we have X1.0 Since F5%.1 (Simple Linear Regression) Degee of freedom 1 4 5 Mean square 200 10. ….Fundamentals of Experimental Design 41 SS X =  (Xi  X ) =  X i2  nX 2 = 200 – 6(5)2 = 50 SSY =  (Yi  Y ) =  Yi2  nY 2 = 5642 – 6(30)2 = 242 2 2 SSXY = ˆ = C 1 SXiYi – n XY = 1000 – 6(5 ´ 30) = 100 SS XY SS X = 100 50 = 2. 2.0 + 2.5 F 19.4 = 2.1.776. That is the relation between X and Y is significant.025.458 = 4.367 Since t0. Suppose. the regression coefficient b1 is significant. the relationship is modelled as multiple linear regression.5/50 = 2 0.0 Table 2. Xk independent variables (factors).0 – 200.0 ˆ = Y  C X = 30  (2  5) = 20.71.12.0 42. X2. A model that might describe the relationship is .0(100) = 200. (iii) The statistic to test the regression coefficient b1 is t0 = C1 MSe SSx = 2.0 10.11 gives ANOVA for Illustration 2.0X (i) When the investment is ` 7 million.0 242.0 C 0 1 The regression model Y = b0 + b1 X = 20. the profit Y = 20.0 SSR = b1SXY = 2. X3.4 = 7.05 Sum of squares 200. TABLE 2. regression is significant.0 = 42.1.0 + 2.0(7) = 34 millions (ii) SSTotal = SSY = 242.0 SSe = SSTotal – SSR = 242.2 Multiple Linear Regression Model In experiments if the response (Y) is linearly related to more than one independent variable. b5 = b12 (2. k are called regression coefficients. 1. Then the model is Y = b0 + b1X1 + b2X2 + b12X1 X2 + e If we let X3 = X1X2 and b3 = b12.27) may also be analysed by multiple linear regression. 
2.42 Applied Design of Experiments and Taguchi Methods Y = b0 + b1X 1 + b2X2 + .31) The parameters can be estimated by deriving least square normal equations and matrix approach. (2.27) The parameters bj.27) Y = b0 + b1X1 + b2X2 + b3X3 + b4X4 + b5X5 + e 2 where X3 = X12. the X matrix and Y vector for n pairs of data are 1  1  X = 1   1  " X X X " X " X X X # # # # # " X X X X11 X12 21 31 22 32 n1 n2 1k   2k   3k     nk  n k +1 (2.34) . b4 = b22. ˆ = (XTX)–1 X TY C (2. (2.28) (2. Eq..33) Y1    Y2    Y = Y3     #   Yn   n 1  (2.. …. + bkXk + e (2. (2.29) (2.28) can be written as Y = b0 + b1X1 + b2X2 + b3X3 + e Similarly.32) where. Suppose we want to add an interaction term to a first order model in two variables. a second order model in two variables is 2 2 Y = b0 + b1X1 + b2X2 + b11X1 + b22X2 + b12X1X2 + e (2.30) can be modelled as a linear regression model Eq. where. Models those are more complex in appearance than Eq. X5 = X1X2 and b3 = b11. X4 = X2 . j = 0. ..Fundamentals of Experimental Design 43 C0     C1    C2  C =   C3     #    Ck   k +11  ˆ + C ˆ X i = 1. ….12 Source of variation Regression Error Total ANOVA for significance of regression Degree of freedom k n –k – 1 n– 1 Mean square MSR MSe F0 MSR/ MSe Sum of square SSR SSe SSTotal where. Xk.36) Hypothesis testing in multiple linear regression The test for significance of regression is to determine whether there is a linear relationship between the response variable Y and the regression variables X1..35) (2. SSTotal = YTY – n Y 2 SSR = bTX TY – n Y 2 SSe = SSTotal – SSR These computations are usually performed with regression software or any other statistical software.12). H0 : b1 = b2 = … = bk = 0 H1 : bj ¹ 0 for at least one j . n ˆ =C The fitted regression model. Y i 0 j ij j 1 k (2.n–k–1 TABLE 2. The test procedure involves ANOVA and F -test (Table 2. X2.k. if F0 exceeds Fa. .  SSR    MSR  k  F0 = = MSe  SSe     n  k 1  Reject H0. The software also reports other useful statistics. 2.. Rejection of H0 implies that at least one regression variable contributes to the model. 5 23.5 22. The data collected is given Table 2. X4 = JSDLS . ILLUSTRATION 2.75 Age 42 24 24 31 38 30 34 28 28 22 Data for Illustration 2.37) A large value of R2 does not necessarily imply that the regression model is a better one. we normally use computer software for obtaining solution. The addition of unnecessary terms to the model decreases the value of R2 adj.75 22. we prefer to use an adjusted R2 . R2 = SSR SSTotal  SSe  =1     SSTotal  (2.13 MALL 21. TABLE 2. 2 Radj =1  SSe /n  p SSTotal /n  1 =1  n  1 n p (1  R 2 ) (2. Adding a variable to the model will always increase the value of R2 . Hence.0 17.2 Height 174 174 166 159 160 162 165 167 164 161 Weight 73 60 58 49 61 52 46 46 39 47 JSDLS 51 58 55 44 49 52 48 48 35 40 When more than two independent variables are involved. X2 = Height.2 Multiple Linear Regression An experiment was conducted to determine the lifting capacity of industrial workers.00 20. The solution obtained from computer software is given as follows: Y = MALL(RESPONSE).5 22.5 20. In general R2 adj value will not increase as variables are added to the model. The worker characteristics such as age.13.5 23. regardless of whether this additional variable is statistically significant or not.44 Applied Design of Experiments and Taguchi Methods Coefficient of multiple determination (R2) R2 is a measure of the amount of variation explained by the model. 
X1 = Age.50 16. body weight. X3 = Weight. body height and Job Specific Dynamic Lift Strength (JSDLS) are used as independent variables and Maximum Acceptable Load of Lift (MALL) is the dependant variable.38) where n – p are the error degrees of freedom. 009 0.92 -0.0 38.17808 -0.059 -0.67 P 0.0 Y 21.0 24.000 Residual 0.500 23.03647 0.0 22.500 23.000 17.001 Standardized residual 1.348 0.00 df 4 5 9 MS 13.177 0.46 0.03 0.243 -0.02803 p 0.006 17.500 22.106 X1 – 0.749 The p-value from ANOVA indicates that the model is significant.356 0. .707 54.0365 X3 + 0.45 0. The R-Sq(adj) value (97.295 22.0 28.7% R-Sq(adj) = 97.141 F 94.750 ˆ Y SS 53.856 23.Fundamentals of Experimental Design 45 Statistical Analysis The regression equation is MALL = 28. From these figures it is inferred that the model is a good fit of the data.42693 Se of Coef 4.64 -1.073 20.500 22.950 23.500 16.02959 0.0 30.001 0.393 0.427 0. the regression coefficient for the variable WEIGHT is not significant (p-value = 0.84 -0.0 31.74 0.177).5 and 2.481 0.006 0.18 -1.000 R-Sq = 98.02324 0. However.50 1.691 22.178 X2 – 0.002 0.6 show the normal probability plot of standardized residuals and the plot of standardized residuals versus fitted values respectively for Illustration 2.757 20.0 28.357 21.2.6% Analysis of Variance Source Regression Error Total Residuals Obs 1 2 3 4 5 6 7 8 9 10 X1 42.500 20.7 + 0.143 0.0 34.10567 -0.0 24.766 16.266 -0.6%) shows that the fitted model is a good fit of the data.427 X4 Predictor Constant Age(X1) Height(X2) Weight(X3) JSDLS(X4) Coefficient 28.02544 0.732 0.000 20.450 0.00 21. Figures 2.205 -0.750 22. 2.6 Normal probability plot of standardized residuals for Illustration 2.46 2 Applied Design of Experiments and Taguchi Methods 1 Standardized residual 0 –1 –2 16 17 18 19 20 Fitted value 21 22 23 24 FIGURE 2.2.5 99 95 90 Per cent (%) 80 70 60 50 40 30 20 10 5 1 –3 Plot of standardized residuals vs. . –2 –1 0 1 Standardized residual 2 3 FIGURE 2. fitted value for Illustration 2. TABLE 2.5 1.4 An induction hardening process was studied with three parameters namely Power potential (P).5 1. 225. 235.14 show the number of births in the last eight years.14 Year: Births: 1 565 2 590 Data for Problem 2.5 (Contd. 215. 260. Scan speed (S) and Quenching flow rate (Q).15 Yield (tons) 100 94 89 95 101 97 88 2.) . 222.5 20 15 17. Estimate the mean and standard deviation from the plot.1 4 597 5 615 6 611 7 610 8 623 3 583 (i) Develop a simple linear regression model to the data for estimating the number of births.5 6. 248. 253. 2.15 shows the yield of sugar and the quantity of sugarcane crushed. (ii) Test the significance of regression using F-test (iii) Test significance of b1 2.5 2. 242. 257. The data in Table 2. TABLE 2. Develop a simple linear regression equation to estimate the yield.3 Data for Problem 2. Compare these values with the actual values computed from the data.5 6.16.5 6. 220.4 P 6.1 In a small town. 218. 250.5 6. a hospital is planning for future needs in its maternity ward.2 Quantity crushed (tons) 250 210 195 220 245 196 189 Obtain normal probability plot for the following data. 240. 255.0 S 15 17.Fundamentals of Experimental Design 47 PROBLEMS 2. The surface hardness in HRA was measured with Rockwell Hardness tester. The data collected is given in Table 2. 210.16 Hardness in HRA 80 80 77 77 82 Data for Problem 2. 232.2 The data in Table 2.0 2. Data: 230. 245. 245. TABLE 2. 212.5 Q 1. 
00 21.23 26.16 Hardness in HRA 80 79 75 66 62 61 64 Data for Problem 2.57 23.17 Work load: (kg m/min) 10 20 30 Data for Problem 2. do we need all the four regressor variables? 2. (i) Plot the effects on a normal probability paper and identify significant effects if any.5 1.5 20 17.6 The following factor effects (Table 2. TABLE 2.03 18.5 (i) Fit a multiple regression model to these data.5 35 44 54 60 67 75 84 92 PV: (L/min) 10.5 8.5 7.7 17. (ii) Obtain half-normal plot of these effects and identify the significant effects.5 8. (iii) Compare these two plots and give your remarks.5 7.7 (i) Develop a simple linear regression model to the data for estimating the pulmonary ventilation (PV).2 15. (ii) Test for significance of regression.18) have been obtained from a study.5 2.45 12.8 22.5 The average of pulmonary ventilation (PV) obtained from a manual lifting experiment for different work loads is given in Table 2.5 S 15 2. (iii) Test the significance of b1. TABLE 2.18 (1) = 7 a = 9 b = 34 c = 16 d = 8 e = 8 ab = 55 ac = 20 ad = 10 bd = 32 abd = 50 cd = 18 acd = 21 bcd = 44 abcd = 61 bc = 40 Factor effects ae = 12 be = 35 abe = 52 ce = 15 ace = 22 bce = 45 abce = 65 abc = 60 de = 6 ade = 10 bde = 30 abde = 53 cde = 15 acde = 20 bcde = 41 abcde = 63 .39 24.5 8.5 1.59 19.4 (Contd.0 20 15 17.5 8.) P 7.5 2.17.48 Applied Design of Experiments and Taguchi Methods TABLE 2.5 1.5 1.5 Q 1. (ii) Test the significance of regression using F-test. What conclusions will you draw? (iii) Based on t -tests. 2. Each level of the factor considered to be treatment.. the associated statistical model is called fixed effects model. etc.. If the levels of a factor are qualitative (type of tool. 2. 2.C H A P T E R 3 · · · · · Single-factor Experiments 3.2.). it is called quantitative factor..). Some examples of a single-factor experiment are: Studying Effect of Effect of Effect of Effect of the effect of type of tool on surface finish of a machined part type of soil on yield type of training program on the performance of participants temperature on the process yield speed on the surface finish of a machined part If the levels are fixed. a Yij = m + Ti + eij     j = 1.1 INTRODUCTION In single-factor experiments only one factor is investigated... it is called qualitative factor. type of material. The levels of a factor can be fixed (selecting specific levels) or random (selecting randomly).1 The Statistical Model  i = 1.2 COMPLETELY RANDOMIZED DESIGN In a single-factor experiment if the order of experimentation as well as allocation of experimental units (samples) is completely random. n where. 3.. pressure. Yij = jth observation of the i th treatment/level ÿm = overall mean 49 (3. velocity. . The factor may be either qualitative or quantitative. . etc. If the levels of a factor are quantitative (temperature. it is called completely randomized design. 3.1) . 1) is a linear statistical model often called the effects model. Ti. Ti . The appropriate hypotheses are as follows: H0 : T1 = T2 = … = Ta = 0 H1 : Ti ¹ 0. Y2 n # T1...1.2) . Y2. Yan Total Ta. Y. is the grand total. at least for one i Here we are testing the equality of treatment means or testing that the treatment effects are zero. is the average of the ith treatment. is the grand average.. Also it is referred as one-way or single-factor Analysis of Variance (ANOVA) model. Yi. T.. there will be n observations under ith treatment.1 Treatment (level) 1 2 # Typical data for single-factor experiment Observations Total Average Y11 Y12 . The dot (. Y1n Y21 Y22 . 
TABLE 3. 3.50 Applied Design of Experiments and Taguchi Methods Ti = i th treatment effect e ij = error Equation (3. =  Y ij j 1 n (3. Yij represents the jth observation under the factor level or treatment i . Ya . Y. For hypothesis testing.2 Typical Data for Single-factor Experiment The data collected from a single-factor experiment can be shown as in Table 3.2.. Generally. the model errors are assumed to be normally independently distributed random variables with mean zero and variance s 2. represents the total of the observations under the i th treatment.. The objective here is to test the appropriate hypotheses about the treatment means and estimate them... And s 2 is assumed as constant for all levels of the factor. The appropriate procedure for testing a treatment means is ANOVA.) indicates the summation over that subscript... T2. # Y1. # a Ya1 Ya2 . T. 3) Yi .e. . It is partitioned into two components.2.6) is used as a measure of overall variability in the data.N–a .5) where N = an. =  a a i 1 Y i... Reject H0. SSTotal = SST + SSe A typical format used for ANOVA computations is shown in Table 3. the total number of observations.4) Y. n n i = 1. ) 2 i 1 j  1 a n (3. ….Single-factor Experiments 51 (3.2 TABLE 3. the name ANOVA is derived from partitioning of total variability into its component parts.a–1... a Y.2 Source of variation ANOVA: Single-factor experiment Degrees of freedom a – 1 Mean square F0 MST MSe Sum of squares SS T Between treatments MST = SST a  1 SSe N  a Within treatments (error) Total SSe SSTotal N –a N – 1 MSe = Mean square is also called variance.3 Analysis of Variance As already mentioned. 2. Total variation = variation between treatments + variation within the treatment or error SSTotal = SS due to treatments + SS due to error i. P-value can also be used. The total corrected Sum of Squares (SS) SSTotal =   (Yij  Y .. Generally. N (3. we use 5% level of significance (a = 5%) for testing the hypothesis in ANOVA. = Y. 3. =   Y ij i 1 j  1 (3. If F0 > Fa. 4 and 6 mm/min for study and decided to obtain four observations at each feed rate.7) 2 SSTotal =   Yij  CF i 1 j 1 a (3.52 Applied Design of Experiments and Taguchi Methods 3.2 N n (3.. = N = n = SS = CF = Ti.8) SST =  Ti2 . we can determine which experiment should be run first. this study consists of 12 experiments (3 levels ´ 4 observations). Suppose the first random number is 7.) total number of observations number of replications/number of observations under the i th treatment sum of squares correction factor ith treatment total CF = a T.2. by selecting a random number between 1 and 12.e. i. Thus. The randomization of test order is required to even out the effect of extraneous variables (nuisance variables).4 gives the test sheet with the order of experimentation randomized.. This procedure is repeated till all the experiments are scheduled randomly. ILLUSTRATION 3. He has selected three different feed rates. Table 3. . a test sheet has to be prepared as explained below. TABLE 3. 2.. Since the order of experimentation should be random. = grand total of all observations/response (Y. Then the number 7th experiment (observation) with feed rate 4 mm/min is run.1 Completely Randomized Design A manufacturing engineer wants to investigate the effect of feed rate (mm/min) on the surface finish of a milling operation.9) (3.10) SSe = SSTotal – SST Note that the factor level (treatment) totals are used to compute the treatment (factor) sum of squares.. n i 1  CF (3. 
The 12 experiments are serially listed in Table 3.4 Computation of Sum of Squares Let T.3 Feed rate (mm/min) 2 4 6 1 5 9 Experiment run number Observations (Surface roughness) 2 6 10 3 7 11 4 8 12 Now.3. 4 Order of experiment (time sequence) 1 2 3 4 5 6 7 8 9 10 11 12 Test sheet for Illustration 3. = 87.0 5. The hypothesis to be tested is H0 : m1 = m2 = m3 against H1: at least one mean is different..5 Feed rate (mm/min) 2 4 6 T. = 7.5. = treatment totals T. The data collected on the surface roughness are given in Table 3.4.4 35.6 9.1 Run number 7 3 11 2 6 1 5 9 4 10 8 12 Feed rate 4 2 6 2 4 2 4 6 2 6 4 6 Suppose that the engineer has conducted the experiments in the random order given in Table 3.7) = (87.5 7.1 Observations (Surface roughness) 7. TABLE 3.5)2 12 = 638.5 Ti .. = grand total These totals are calculated in Table 3. Let Ti.8 4.3 6.5 Y .88 (from Eq.8 9.8 8.2 8.5..29 The data can be analysed through analysis of variance.2 N Data from surface finish experiment of Illustration 3.35 8. 3..6 21. Computation of sum of squares Correction factor (CF) = T.2 7.Single-factor Experiments 53 TABLE 3.65 5.2 8.6 7.021 .5 4. Yi 30. 6)2 4 + (21. The error sum of squares is obtained by substraction.8) = {(7)2+ (7. the magnitude of the response may be a larger value involving three or four digits. ANOVA for the surface finish experiment (Illustration 3. These results will be same as those obtained on the original data.52 3.5)2 + … + (8.9 = 4. for determining the optimal level. we compute the sum of squares and carryout ANOVA.5 Effect of Coding the Observations Sometimes.5)2} – CF = 667.26. it is preferable to use the mean values on original data. ni – CF (from Eq.021 = (30.1) Sum of squares 25. That is.6.622 The analysis of variance is given in Table 3. . Using these data.434 F0 29. TABLE 3. H0 is rejected. To simplify the computations.4)2 4 +  CF  638.622 3.6 Source of variation Feed rate Error Total F5%.550 – 638.021 = 29. we adopt the coding of the observations (Response). we select an appropriate value and subtract from each individual observation and obtain coded observations.907 29.2.54 Applied Design of Experiments and Taguchi Methods 2 SSTotal =   Yij  CF i 1 j 1 a n (from Eq.52 > 4.2. The inference is that the treatment means are significantly different at 5% level of significance.6)2 + (21. 3. For this purpose.4)2 + (37.529 Degrees of freedom 2 9 11 Mean square 12.643 – 638. In such cases the resultant sum of squares would be a very large value.529 SST =  a Ti2 .9) (37.26 Since F0 = 29.811 0.021 = 25. However.5)2 12 = 663. 3.5)2 4 i 1 = (30. the treatment (feed rate) has significant effect on the surface finish. 6 7.0 from each observation.6 – 6.11) It is a linear statistical model with parameters m and Ti.2 1. it is evident that coding the observations by subtracting a constant value from all the observations will produce the same results.5)2 12 = 1.5)2 4 i 1 = (2. Ç (3. ni – CF  (6.5 –2. 2.0)2 + (0.5 Computation of sum of squares CF = T.13) .7. From this.021 2 SSTotal =   Yij  CF i  1 j 1 = {(0.2 2.6 0. N Ti = Y i .5 Ti .3 – 0. = 3.7 Feed rate (mm/min) 2 4 6 0. 2.622 These sum of square computations are same as the one obtained with the original data.8 –2.6)2 4 (7. . …..0 –1..Single-factor Experiments 55 For illustration purpose.2.2 Coded data for the surface finish experiment (Illustration 3.2 1.021 = 25.6 Estimation of Model Parameters The single-factor model is Yij = m + Ti + eij (3. ˆ = Y .6)2 4 + +  CF = 26..643 – 1. 
let us consider the experiment on surface finish (Table 3.12) i = 1.2 N a n = (3.5)2} – CF = 30.5)2 + … + (1. we obtain the coded data as given in Table 3. a (3. Subtracting 7. .529 SST =  a Ti2 .1) Observations (Surface roughness) 0.550 – 1. TABLE 3.5). 3.5 T.8 1.4 2.021 = 29.  Y. The total variance is partitioned to test that there is no difference in treatment means. T 2 = 5.745 £ m2 £ 5.29 = –1.095 3. j ) + tB /2. + tB /2. The observations are adequately described by the model Yij = m + Ti + eij . j )  tB /2.35 – 0. ) = Yij  Y i .35 – 2. ˆij = Yij – m – Ti The error e = Yij  Y.745 4.434 4 5.434 4 £ m2 £ 5.35 – 7.5 12 = 7.605 £ m2 £ 6.1. N  a MSe n  (Ni  N j )  (Y i.. N  a MSe n (3.7 Model Validation The ANOVA equation is an algebraic relationship.262 The 95% confidence interval for the treatment 2 is 5. the 95% confidence interval for the treatment 2 can be computed using Eq.  Y . (3.  Y.  tB /2.2.15) as follows: ˆ = N 87.94 At 5% significance level t0.025.9 = 2.262 0.14) The confidence interval for the ith treatment mean is given by Yi.  Y .56 Applied Design of Experiments and Taguchi Methods That is. This requires the following assumptions to be satisfied: 1.262 0. (3.. N  a MSe n  Ni  Y i..35 – 2. the overall mean m is estimated by the grand average and the treatment effect is equal to the difference between the treatment mean and the grand mean.16) For Illustration 3.29 ˆ = Y 2.  (Y i . N  a MSe n (3. say (mi – mj) is given by (Y i .  Y .35 – 0.15) And the confidence interval on the difference in any two treatment means. If these assumptions are valid.72 5.6 – 0.5 0. which should be removed from the data before the data are analysed. the estimate of any observation in the ith treatment is simply the corresponding treatment average.35 8. Rij = Yij  Yi. + (Y i.65 4 0. The residual for the jth observation of the i th treatment is given by ˆ (prediction error) Rij = Yij – Y ij ˆ =N ˆ ˆ +T where. ) = Y i.2 5.8 0. The errors have constant variance s 2.19) That is. The residuals are given in the left corner of each cell in Table 3. Check to determine the outliers An outlier is an observation obtained from the experiment which is considered as an abnormal response.6 – 0.55 8.5 8.8. the residuals must be structure less. (Eq.8. . Y ij i (3.17) = Y . Therefore.65 ˆ =Y Y ij i.1 are given in Table 3.45 6 0.65 6.18) (3.Single-factor Experiments 57 2. Rij = Yij  Yi.8 – 0. 3.  Y . That is. 3. The errors are independent and normally distributed. These can also be identified by examining the standardized residuals (zij) zij = Rij MSe (3. Estimation of residuals Data and residuals of Illustration 3.68 7. Any residual which is away from the straight line passing through the residual plot on a normal probability paper is an outlier. The following graphical analysis is done to check the model.2 0.32 9.15 4.2 – 0.20) .1) Observations (Surface roughness) 7.38 Residuals for the surface finish experiment (Illustration 3.19) TABLE 3.88 7..8 Feed rate 2 – 0.75 9. Residual (Rij) = Yij  Yi.3 7. This is carried out by analysing the residuals (Rij).8 0. (3.0 – 0.. the ANOVA test is valid. If the model is adequate.15 4.85 8. 434 = 1.5 FIGURE 3.29 This indicates that all residuals are less than ±2 standard deviations.0 1. there are no outliers. From Illustration 3. Check for the independence: If the plot of errors (residuals) in the order of data collection shows a random scatter. Model adequacy checking 1. Moderate departures from the straight line may not be serious. Figure 3.5 –1.85 0. 
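The confidence interval calculation above can be reproduced with a standard t quantile instead of a table. The following lines are an added note (SciPy's t distribution is assumed); they use the quantities quoted for treatment 2 of the surface finish experiment, namely a treatment mean of 5.35, MSe = 0.434, n = 4 and 9 error degrees of freedom.

    from math import sqrt
    from scipy import stats

    mean_i, mse, n, df_e, alpha = 5.35, 0.434, 4, 9, 0.05

    t_crit = stats.t.ppf(1 - alpha / 2, df_e)          # t(0.025, 9), about 2.262
    half_width = t_crit * sqrt(mse / n)                # about 0.745
    print(f"{mean_i - half_width:.3f} <= mu_2 <= {mean_i + half_width:.3f}")
    # For the difference of two treatment means, widen the interval by using
    # sqrt(2 * MSe / n) in place of sqrt(MSe / n), as in the text.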
since the sample size is small. 99 95 90 Per cent (%) 80 70 60 50 40 30 20 10 5 1 –1.9 gives the residuals .0 –0.1.0 Residual 0. the standardized residuals follow normal distribution with mean zero and variance one. s 2). 2. can be considered as an outlier. This can be verified in the normal plot also. A residual bigger than 3 or 4 standard deviations from zero.1 shows the plot of residuals on the normal probability paper for Illustration 3. the largest standard residual is Rij MSe = 0. Hence.1. Thus.73%) within ± 3.1 Normal probability plot of residuals for Illustration 3.1. Also observe that there are no outliers. Table 3.5 0. it indicates the violation of the independent assumption.5 1. The plot of residuals on a normal probability paper shall resemble a straight line for the assumption to be valid. we infer that the errors are independent. Check for normality: In this assumption errors are normally distributed. From the figure it can be inferred that the errors are normally distributed. If any correlation exists between the residuals.58 Applied Design of Experiments and Taguchi Methods If the errors are NID (0. about 95% of residuals should fall within ± 2 and all of them (99. The plot has been obtained as explained in Chapter 2. Figure 3. From Figure 3.0 –0.15 – 0.68 – 0.65 0.75 – 0.Single-factor Experiments 59 as per the order of experimentation (Random order).5 0 2 4 6 Order of experiment 8 10 12 FIGURE 3. it can be seen that the residuals are randomly scattered indicating that they are independent.72 0.1.2 shows the plot of residuals versus order of experimentation.55 0.5 Residual 0.65 0.9 Residuals as per the order of experimentation for Illustration 3. . TABLE 3.2.32 0.2 Plot of residuals vs.1 Run number 7 3 11 2 6 1 5 9 4 10 8 12 Feed rate 4 2 6 2 4 2 4 6 2 6 4 6 Residual – 0.45 0.38 Order of experiment (time sequence) 1 2 3 4 5 6 7 8 9 10 11 12 1.85 – 0.15 – 0. order of experimentation for Illustration 3.0 0. The best treatment is the one that corresponds to the treatment mean which optimizes the response. with the mean. Check for constant variance: If the model is adequate. 3.5 Residual 0. it is observed that no unusual structure is present indicating that the model is correct. the ANOVA reveals the rejection of null hypothesis. fitted values for Illustration 3. These tests are employed after analysis of variance is conducted. That is. Hence.3 shows the plot of residuals versus Y ij 1. the comparison among these treatment means may be useful.3. we can infer that the error variance increases If the spread of residuals increases as Y ij ˆ for Illustration 3.8 Analysis of Treatment Means Suppose in a single-factor experiment. there are differences between the treatment means. Other preferred treatments can be identified through multiple comparison methods. Figure 3. The following are some of the important pair wise comparison methods: · · · · Duncan’s multiple range test Newman–Keuls test Fisher’s Least Significant Difference (LSD) test and Turkey’s test .3 Plot of residuals vs. This is also useful to identify the best and preferred treatments for possible use in practice. From Figure 3.0 0.0 –0.1. The structure less and should not be related to any variable including the predicted response ( Y ij plots of residuals versus predicted response should appear as a parallel band centered about zero. the plot of residuals should be ˆ ).60 Applied Design of Experiments and Taguchi Methods 3. but which means differ is not known. ˆ increases.5 5 6 7 Fitted value 8 9 FIGURE 3.1.2. p = 2. 
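The residual checks described above are today produced by software rather than on probability paper. A minimal plotting sketch is given below as an added note; Matplotlib and SciPy are assumed, and the residual values are placeholders to be replaced by the actual residuals of Tables 3.8 and 3.9.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    # Placeholder residuals in run order; in practice use R_ij = Y_ij - Ybar_i.
    residuals = np.array([-0.65, 0.15, 0.85, -0.35, 0.45, -0.55,
                           0.25, -0.15, 0.65, -0.75, 0.05, 0.05])
    run_order = np.arange(1, residuals.size + 1)

    fig, axes = plt.subplots(1, 2, figsize=(9, 4))
    stats.probplot(residuals, dist="norm", plot=axes[0])   # normality check
    axes[0].set_title("Normal probability plot of residuals")
    axes[1].plot(run_order, residuals, "o-")               # independence check
    axes[1].axhline(0, color="grey")
    axes[1].set_xlabel("Order of experiment")
    axes[1].set_ylabel("Residual")
    plt.tight_layout()
    plt.show()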
the difference is considered as significant.5). That is. …. Cycle 2: · Test the difference between the second largest mean and the smallest mean with Ra–1 · Continue until all possible pairs of means a (a–1)/2 are tested. f) for p = 2.23) where qa (p. whence Kp is qa (p. Rp = ( SY ) ra(p. a. a = significance level f = degrees of freedom of error Step 4: Compute least significant ranges (Rp) for p = 2. 3. The comparison is done in a specific manner.21) a If n is different for the treatments. where LSD = ta/2. the procedure of this test is similar to that of Duncan’s multiple range test except that we use studentized ranges. f) SY . in Step 4 of Duncan’s procedure Rp is replaced by Kp. Compute the standard error (Se) of mean ( SY i ) = MSe n (3. 3. …. a (3. N–a 1 1  MSe  +   ni nj    (3. where nh = i 1  1/ni a Step 3: Obtain values of ra (p. from Duncan’s multiple ranges (Appendix A.4). Fisher’s Least Significant Difference (LSD) test In this test we compare all possible pairs of means with LSD. Newman–Keuls test Operationally. f ) for p = 2. The following are the steps: Step 1: Step 2: Arrange the a treatment means in ascending order.22) Step 5: Test the observed differences between the means against the least significant ranges (Rp) as follows: Cycle 1: · Compare the difference between largest mean and smallest mean with Ra · Compare the difference between largest mean and next smallest mean with Ra–1 and so on until all comparisons with largest mean are over. ….Single-factor Experiments 61 Duncan’s multiple range test This test is widely used for comparing all pairs of means. a. f) values are obtained from studentized range table (Appendix A. replace n by nh. 3. Inference: If the observed difference between any two means exceed the least significant range.24) . …. a (3. 3. a = number of treatments/levels f = degrees of freedom associated with MSe If the absolute difference between any two means exceed Ta. Duncan’s multiple range test also give good results. (3. if | Yi.  Y j . if | Yi. j )  qB (a. f ) 2 MSe n where. | > LSD. we conclude that the two means differ significantly. 4. f ) TB = B 2 We can also construct a set of 100(1 – this procedure as follows: (Y i .  Y .62 Applied Design of Experiments and Taguchi Methods If the absolute difference between any two means exceeds LSD.26) q (a. f ) MSe n 1 1  MSe  +   ni nj    (3.25) In this test we use studentized range statistic. LSD = t a/2. The power of Newman–Keul’s test is lower than that of Duncan’s multiple range test and thus it is more conservative. f ) (3. In a balanced design. Ta = qa(a.27) a)% confidence intervals for all pairs of means in MSe n . we conclude that the two population means mi and mj differ significantly. Based on studies made by some people.i  j  Ni  N j  (Y i. j ) + qB ( a. . N–a Turkey’s test 2 MSe n (3. When sample sizes are unequal. Therefore.28) Which pair-wise comparison method to use? There are no clear cut rules available to answer this question. That is. Fisher’s LSD test is very effective if the F-test in ANOVA shows significance at a = 5%. these two means are considered as significantly different. the following guidelines are suggested: 1. we say that the population means mi and mj differ significantly. n1 = n2 = … = n. Since Turkey’s method controls the overall error rate. That is. 2. All possible pairs of means are compared with Ta. | > Ta. many prefer to use it. 3.  Y j .  Y . at least for one i .. 94.15 SSe = SSTotal – SST = 811. B.15 = 135.11. 
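For a balanced design, the LSD and the Tukey critical difference each reduce to a single number, which can be obtained from software instead of tables. The sketch below is an added note; SciPy 1.7 or later is assumed for the studentized range distribution, and the ANOVA quantities used (a = 4 treatments, n = 5 replicates, 16 error degrees of freedom, MSe = 8.5) are hypothetical placeholders.

    from math import sqrt
    from scipy import stats

    a, n, df_e, mse, alpha = 4, 5, 16, 8.5, 0.05

    # Fisher LSD: any pair of treatment means differing by more than this is significant
    lsd = stats.t.ppf(1 - alpha / 2, df_e) * sqrt(2 * mse / n)

    # Tukey critical difference T_alpha = q(alpha; a, df_e) * sqrt(MSe / n)
    q_crit = stats.studentized_range.ppf(1 - alpha, a, df_e)
    tukey = q_crit * sqrt(mse / n)

    print(f"LSD = {lsd:.2f}, Tukey critical difference = {tukey:.2f}")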
The hypotheses to be tested are H0 : T1 = T2 = T3 = T4 H1 : Ti ¹ 0.25 = 811. 471 425 439 390 T. ILLUSTRATION 3.2 Observations 94 87 92 82 94 88 84 78 98 84 86 76 Ti .2 Brick Strength Experiment A brick manufacturer had several complaints from his customers about the breakage of bricks.8 78 = 148781.75 SST = i 1  Ti2 . He procures the clay from four different sources (A.25 SSTotal =   Yij2  CF i  1 j 1 = 149593 – 148781.2 discusses the above pair wise comparison tests. Five samples of bricks have been made with the clay from each source. n – CF a = (471) 2 + (425)2 + (439) 2 + (390)2 5  CF = 676.75 – 676.10. The breaking strength in kg/cm2 measured is given in Table 3. He suspects that it may be due to the clay used. TABLE 3. = 1725 Data analysis: CF = The computations required for ANOVA are performed as follows: (1725)2 20 a n Yi.6 The computations are summarized in Table 3.Single-factor Experiments 63 Illustration 3. He wants to determine the best source that results in better breaking strength of bricks. To study this problem a single-factor experiment was designed with the four sources as the four levels of the factor source.2 85 87.10 Source A B C D 96 82 88 75 89 84 89 79 Data on brick strength for Illustration 3. C and D). 15 135.4 Normal probability of residuals for Illustration 3.11 Source of variation Between treatments Error Total F0.5 show the normal plot of residuals and residuals versus fitted values respectively.5 –5. Figures 3. Se = MSe n = 8.0 FIGURE 3. Duncan’s multiple range test Step 1: Arrange the treatment means in the ascending order. we reject H0. the four treatments differ significantly.75 Degrees of freedom 3 16 19 Mean square 225.38 8.05. That is.0 –2.5 0. This indicates that the clay has significant effect on the breaking strength of bricks.5 5.475 F0 26.8 A 94.2 Compute the standard error of mean.475 5 = 1. 99 95 90 Per cent (%) 80 70 60 50 40 30 20 10 5 1 –7.2) Sum of squares 676.16 = 3.0 Residual 2.60 811.30 .59 Since F0 > Fa.4 and 3.3. Source Mean Step 2: D 78 B 85 C 87.2.64 Applied Design of Experiments and Taguchi Methods TABLE 3.24 ANOVA for the brick strength experiment (Illustration 3. 3. p = 2.8 – 78. fitted values for Illustration 3.199 Step 5: Compare the pairs of means.8 < 3.00 3.5 Plot of residuals vs.0 = 16. Step 3: Select Duncan’s multiple ranges from Appendix A.2 – 87.5 –5.8 = 6. 16) ´ Se The three Least Significant Ranges are given as follows: R2 3.2 – 78.0 – 78.0 65 2.23 Step 4: Compute LSR. Cycle 1: A vs D: 94. The pair wise comparisons are as follows. 4.0 = 7 > 3.2 > 4.0 –2.2.095 C vs B: 87.9 R3 4.199 A vs B: 94.90 Cycle 3: B vs D: 85.8 > 4.2 > 4.5 For ra (p.0 = 9.05(p. The Least Significant Range (LSR) = ra(p.90 Significant Significant Significant Significant Not Significant Significant .4 > 3.15 3.095 R4 4.90 Cycle 2: C vs D: 87.5 Residual 0.0 = 2.095 A vs C: 94.8 – 85. f). the ranges are: r0. The absolute difference is compared with LSR. 16): 3.2 – 85.Single-factor Experiments 5.0 80 84 Fitted value 88 92 96 FIGURE 3.0 = 9. for qa (p.2 – 85.16 = 2.90 2  8. 4. A BC D Fisher’s Least Significant Difference (LSD) test LSD: ta/2.2 > 5.625 A vs B: 94.90 Significant Significant Significant Significant Not significant Significant We see that the results of this test are similar to Duncan’s multiple range test.66 Applied Design of Experiments and Taguchi Methods We see that there is no significant difference between B and C. the ranges are: q0.745 R4 5. p = 2.05 Step 4: The Least Significant Range (LSR) = qa (p.8 = 6. 
f ).475 5 BC D D 78 B 85 C 87.8 – 85.025.475 5 . These are grouped as follows: A Newman–Keul’s test Step 1: Source Mean Step 2: Se = MSe n = 8.2) = 2.8 > 4.8 < 3.2 = 1.2 – 87.745 A vs C: 94.745 C vs B: 87. N–a 2 MSe n where t 0. And A and D differ significantly from B and C.05(p.2 > 4.12 × = 3.0 = 7 > 3. The absolute difference is compared with LSR.30 Step 3: From studentized ranges (Appendix A. 16): 3.90 Cycle 2: C vs D: 87.0 – 78.2 – 78.12 (Appendix A.0 = 9.00 3.0 = 2. Cycle 1: A vs D: 94. 3.4 > 3.0 = 9.9 R3 4.8 – 78.8 A 94.4).65 4. 16) × Se The three Least Significant Ranges are as follows: R2 3.625 Step 5: The pair-wise comparisons are as follows.0 = 16.90 Cycle 3: B vs D: 85. 27 Significant Significant Significant Not significant Significant Significant BC D Y1 vs Y4 : 94.4 > 5.05 = 4.8 = – 2. Source B: Y2 = 85.2 – 87.27 Source A: Y1 = 94.8 > 3.27 Y3 vs Y4 : 87.2 – 87.2.2 > 3. A Turkey’s test Ta = qa (a. f ) MSe n q0.2. Source D: Y4 = 78 Comparison of all possible pairs of means is done as follows. + tB /2.29) Yi.2 – 78.0 = – 9.0 – 87. Source D: Y4 = 78 Comparison is similar to Fisher’s LSD test. Y1 vs Y2 : 94.9 Y2 vs Y4 : 85.2 > 5. A We can also construct a confidence interval in this procedure using Eq. Source B: Y2 = 85.9 Multiple Comparisons of Means Using Contrasts A contrast is a linear combination of treatment totals and the sum of its coefficients should be equal to zero.0 – 78.8 > 5.8 = – 6.8 .0 = – 9.2.05 8.9 This test also gives the same result.0 = – 7 > 3. N B MSE n (3. (i = 1. (3.Single-factor Experiments 67 Source A: = Y1 = 94.8 – 78.8 < 5.2 > 3.2 – 85.0 = – 9. Source C: Y3 = 87.9 Y1 vs Y4 : 94.9 Significant Significant Significant Not significant Significant Significant BC D Y1 vs Y3 : 94.8 = – 6. 2. say C1 = T1.2 – 85.9 Y2 vs Y3 : 85.30) .8. (3. We can form a contrast. is the ith treatment total.8 < 3.05(4.475 5 = 5. N B MSE n  N2  Yi. ….0 = – 9.27 This test also produces the same result.4 > 3.8 – 78.0 = – 16.9 Y3 vs Y4 : 87.27 Y2 vs Y3 : 85.0 = – 16.8 = – 2. a).0 = – 7 > 5. Source C: Y3 = 87.2 – 78.16) = 4.27 Y2 vs Y4 : 85.0 – 87.27 Y1 vs Y3 : 94.0 – 78. Suppose Ti.2 > 5.29) 3. – T2. We consider the absolute difference in the two means for comparison. Y1 vs Y2 : 94.  tB /2.    i  C2 1 SSC =  a  = a i n  ci2 n  ci2 i 1 i 1 2 (3. That is.34) In any contrast if one of the treatments is absent. MSC is the contrast mean square. the number of contrasts should be equal to (a – 1).1.N–a Orthogonal contrasts Two contrasts with coefficients ci and di are orthogonal if  ci di = 0. reject H0 Alternatively. an F-test can be used. For a treatments.29) the sum of coefficients of the treatments is zero. = (each contrast will have one degree of freedom)  a  ci Ti. Testing C1 is equivalent to test the difference in two means. i 1 a (3. C1 = T1. F0 = SSC /1 MSe MSC MSe where. Thus. (+1) + (–1) = 0.  ci Ti. a a t0 = i 1 (3. – T2.68 Applied Design of Experiments and Taguchi Methods Note that in Eq. a t-test can be used to test the contrasts. Reject H0. a set of a – 1 orthogonal contrasts partition the treatment sum of squares into one degree of freedom a – 1 independent components. Sum of product of the coefficients of these two contrasts C1 and C2 is 0 (1) * (0) + (–1) * (0) + (1) * (0) + (–1) * (0) = 0. its coefficient is considered as zero. (3. C2 = T3. (3.31) ci2 nMSe  i 1 where. If | t0 | > ta/2. All these contrasts . So. – T4. if F0 > Fa.32) where.33) where Ci is the i th contrast totals. For example. C1 and C2 are said to be orthogonal. 
ci is the coefficient of the ith treatment. N–a. 2) Sum of squares 676. – T2.10 8. Generally. + T2 – T3. By substituting the values of treatment totals (Table 3. – T4.33 F5%.12. This will avoid the bias if any that may result due to examination of data (if formed after experimentation). all the three contrasts are significant.15 Thus. Let these contrasts be. C1 = 471 – 425 = 46 C2 = 471 + 425 – 439 – 390 = 67 C3 = 439 – 390 = 49 The sum of squares of these contrast are SSC1 = (C1 ) 2 n  ci2 = (46)2 5(2) = 211. C3 = T3.60 224.60 SSC2 = SSC3 = (67)2 5(4) (49)2 5(2) Total = 224.75 Degrees of freedom 3 1 1 1 16 19 Mean squares 225. the treatment sum of squares is partitioned into three (a – 1) single degree of freedom of contrast sum of squares. we can form three orthogonal contrasts.45 = 240.60 811.2. TABLE 3.10). B .49. the contrasts are formed prior to experimentation based on what the experimenter wants to test.15 211.45 240. These are summarized in Table 3.475 F0 26.16 = 4. That is. C and D).96 26. C1 = T1. we have four treatments or levels (A.10 = 676.12 Source of variation Between treatments C1 C2 C3 Error Total ANOVA for testing contrasts (Illustration 3.38 211.Single-factor Experiments 69 should be orthogonal. the sum of the contrast SS is equal to SST. At 5% significance level.59 24.1.45 240.48 28. . – T4. C2 = T1. Hence. Note that any pair of these contrasts is orthogonal.10 135.60 224. Testing contrasts: In Illustration 3. which contribute to variability that can be controlled. In this design we control the variation due to one source of noise. In this design the blocks represent a restriction on randomization.6 A nuisance factor. The blocks can be batches of material. the error variance would be large and sometimes we may not be able to attribute whether the variation is really due to treatment.70 Applied Design of Experiments and Taguchi Methods Thus. In order to separate the variability between the samples from the error. each drill bit is used once on each of the four samples. But within the block randomization is permitted. it will be a completely randomized design. days.6. when the noise factor is known and controllable. This becomes the randomized complete block design. we use randomized block design. As shown in Figure 3. people. the difference in strength between A and B. We want to determine whether these four drill bits produce the same surface finish or not.6. A nuisance factor or noise factor affects response. The samples are the blocks as they form more homogenous experimental units. it is difficult to conclude whether the surface finish is due to the drill bits or samples and the random error will contain both error and variability between the samples. any variation between treatments can be attributed to the drill bits.3 RANDOMIZED COMPLETE BLOCK DESIGN In any experiment variability due to a nuisance factor can affect the results. If these samples are homogenous (have more or less same metallurgical properties). Suppose there are four different types of drill bits used to drill a hole. If he assigns the samples randomly to the four drill bits. This is explained by an example. Blocking principle is used in the design If the nuisance factor is not addressed properly in design. Nuisance factor Unknown and uncontrollable Known and uncontrollable Known and controllable Randomization principle is used in the design The factor can be included in the design and data can be analysed FIGURE 3. If the experimenter decides to have four observations for each drill bit. 
and C and D is significant and also the difference between the sum of means of A and B and C and D is significant. If the samples differ in metallurgical properties. machines. . different laboratories. This design is widely used in practice. he requires 16 test samples. etc. A nuisance factor can be treated in experiments as shown in Figure 3. 3. Complete indicates that each block contains all the treatments (drill bits). j Data analysis: 9 19 28 18 74 Surface roughness data (Illustration 3.36) = 92 + 92 + … + 92 + 92 – CF = 624. = 305 Ti.. TABLE 3.Single-factor Experiments 71 3.1 Statistical Analysis of the Model Let a treatments to be compared and we have b blocks. 2.35) where.. The effects model is i = 1. Each drill bit is used once on each specimen resulting in a randomized complete block design.13 Types of drill bit 1 1 2 3 4 T.. The order of testing the drill bits on each specimen is random. b (3.3 Randomized Complete Block Deign Consider a surface finish testing experiment described in Section 3.. There are four different types of drill bits and four metal specimens.3) Specimens (blocks) 2 10 22 30 23 85 3 8 18 23 21 70 4 12 23 22 19 76 39 82 103 81 T..13.. . = overall effect Ti = effect of ith treatment Bj = effect of jth block eij = random error SSTotal = SST + SSBlock + SSe df: N – 1 = (a – 1) + (b – 1) + (a – 1) (b – 1) m ILLUSTRATION 3.94 .. The computation required for ANOVA are as follows: (305) 2 16 a b Correction factor (CF) = = 5814.06 2 SSTotal =   Yij  CF i 1 j 1 (3. The data obtained are surface roughness measurements in microns and is given in Table 3.3. 2. . a Yij = mÿ + Ti + Bj + eij    j = 1.3. here the treatments are fixed and we can use any one multiple comparison method for comparing all pairs of treatment means. where. That is. j b SSB = j 1 a – CF (3.9 = 3.3.05.3.14 Source of variation Types of drill bits (Treatments) Specimens (Blocks) Error Total F0.39) . different types of drill bits produce different values of surface roughness.06 These computations are summarized in Table 3. the null hypothesis is rejected.  Y .94 Degrees of freedom Mean squares 3 3 9 15 179. Rij = Yij  Yi.19 55.  Y j .38) = (74)2 + (85)2 + (70)2 + (76)2 4 – CF = 30.86 Conclusion: Since F0.06 6.06 624. Also. + Y j .3) Sum of squares 539. The residuals are computed as follows: ˆ R = Y – Y ij ij ij ANOVA for the effect of types of drill bits (Illustration 3. 2  T. the number of replications in each treatment in this case is the number of blocks (b). Y ij So.64 ˆ = Y i.72 Applied Design of Experiments and Taguchi Methods a SST = i 1  Ti.69 The block sum of squares is computed from the block totals.69 30. + Y.12 F0 29.89 10.39 1.14 TABLE 3. (3. Note that. The analysis of residuals can be performed for model adequacy checking as done in the case of completely randomized design.37) = (39)2 + (82)2 + (103)2 + (81)2 4 – CF = 539.9 is less than F0 .05.19 SSe = SSTotal – SST – SSBlock = 55.. b 2 – CF (3.. .41) where C includes all other terms not involving X. Coded data are obtained by subtracting 20 from each observation and given in Table 3.42) .Single-factor Experiments 73 3.j –11 –1 8 –2 –6 2 –10 2 X 3 –5 3 –12 –2 3 1 –10 4 –8 3 2 –1 –4 –41 2 13 1 –25 Ti.3.2 Estimating Missing Values in Randomized Block Design Sometimes we may miss an observation in one of the blocks. (a  1) (b  1) X= (3. j  CF     i 1 j 1    b  i 1 a  j 1     (3.2 1 ab (Y. A missing observation introduces a new problem into the analysis because treatments are not orthogonal to blocks. 
Types of drill bit The missing observation X is estimated such that the SSe are minimized.. The missing observation is estimated such that the sum of squares of errors is minimized.  = 13.  1 a 2  Y.15. TABLE 3.j + X )2 + (3. + bY. + X )2 + C = X2  2 (Yi . That is. we will miss an observation.  CF     Y. Under these circumstances we can estimate the missing observation approximately and complete the analysis. j + 1 ab Y. and Y¢ represent the total without the missing observation that is.15 Coded data for Illustration 3. The table below shows the coded data for the surface finish testing experiment and suppose X is the missing observation. =  25 Y3. but the error degrees of freedom are reduced by one. + X)  1 a (Y. In such cases.40) =   Yij2  1 b 1 b 2  Yi. X can be attained from dSSe dX = 0  aYi . When the experimental units are animals or plants.3 with one missing observation Specimens (blocks) 1 1 2 3 4 T.  a b   1 a 2 1 b 2  SSE =    Yij2  CF   Yi. This may happen due to a reason beyond the control of the experimenter.2  =  5 and Y...  j  Y. an animal/plant may die or an experimental unit may get damaged etc. every treatment does not occur in every block. Y. a randomized incomplete block design must be used.16 Source of variation Types of drill bits Specimens (Blocks) Error Total F0.94 F0 27. When all treatment comparisons are equally important. The procedure is to prepare the detonators using a particular compound. TABLE 3. ANOVA with one missing observation Sum of squares 493. Note that the result does not change.29 Degrees of freedom Mean squares 3 3 8 14 164. The order of the experimentation in each block is random. detonators are considered as blocks. Currently they want to experiment with four different types of compounds.66 47.17.4 Balanced Incomplete Block Design (BIBD) A company manufactures detonators using different types of compounds. .3).33. Further. any pair of treatments occurs together the same number of times as any other pair. ILLUSTRATION 3.16. test it and measure the ignition time. They wish to study the effect of type of compound on the ignition time. If only 3 experiments are possible in each day.3. we go for balanced incomplete block design. Similarly if a batch of raw material (block) is just sufficient to conduct only three treatments out of four.55 5.4 BALANCED INCOMPLETE BLOCK DESIGN (BIBD) If every treatment is not present in every block.37 5.86 At 5% level of significance. the treatment effect is significant. that is. With reference to the surface finish testing experiment (Illustration 3.67 0. Suppose in a randomized block design (day as block). four experiments are to be conducted in each block for each treatment. Because variation in the detonators may affect the performance of the compounds.5 557.05.13 16.8 = 3. it is called randomized incomplete block design. The design and data are given in Table 3. we use BIBD. the treatment combination in each block should be selected in a balanced manner. This type of design is called a balanced incomplete block design (BIBD). suppose the size of the specimen is just enough to test three drill bits only. each compound is just enough to prepare three detonators. in terms of original data X = 26. The ANOVA with the estimated value for the missing observation is given in Table 3.74 Applied Design of Experiments and Taguchi Methods = 4(13) + 4(  5)  (  25) (4  1) (4  1) = 6. Tables are available for selecting BIBD designs for use (Cochran and Cox 2000). Therefore. 
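The missing-value formula for the randomized block design can be wrapped in a small function. The sketch below is an added note in Python; it applies the formula to the totals quoted above for the coded drill-bit data (treatment total 13 and block total -5 without the missing cell, grand total -25 without the missing cell).

    def missing_value_rbd(a, b, trt_total, blk_total, grand_total):
        """Estimate a single missing value in a randomized complete block design.

        a, b        : number of treatments and number of blocks
        trt_total   : total of the treatment containing the missing cell (X excluded)
        blk_total   : total of the block containing the missing cell (X excluded)
        grand_total : grand total of all available observations (X excluded)
        """
        return (a * trt_total + b * blk_total - grand_total) / ((a - 1) * (b - 1))

    # Totals from the coded-data example above
    print(missing_value_rbd(4, 4, 13, -5, -25))   # about 6.33 (26.33 in original units)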
Incomplete block designs are used when there is a constraint on the resources required to conduct experiments such as the availability of experimental units or facilities etc.93 3.33 That is. The ignition time of the detonators is an important performance characteristic. we go for BIBD. j Data for BIBD (Illustration 3. a Yij = mÿ + Ti + Bj + eij   j = 1..   N  i j   (3. s 2) random error... 3.1 Statistical Analysis of the Model The statistical model is i = 1. b (3.46) . m For the statistical model considered.17 Treatment (Compound) 1 2 3 4 T. 2.4.4) Detonators (blocks) 1 23 19 25 — 67 2 23 21 — 31 75 3 27 — 33 42 102 4 — 30 36 41 107 73 70 94 114 T..Single-factor Experiments 75 TABLE 3. = 351 Ti.43) Yij is the ith observation in the j th block = overall effect Ti = effect of ith treatment Bj = effect of jth block eij = NID (0.. we have a = number of treatments = 4 b = number of blocks = 4 r = number of replications (observations) in each treatment = 3 K = number of treatments in each block = 3 SSTotal = SST(adjusted) + SSBlock + SSe  T2  SSTotal =   Yij2  CF  CF = . K  Qi2 SST (adjusted) = where. 2. .. Qi = adjusted total of the i th treatment.45) The treatment sum of squares is adjusted to separate the treatment and block effects because each treatment is represented in a different set of r blocks. i 1 a Ma (3... .44) (3. 49) For Illustration 3. 2 b (3.4. j – CF with b – 1 degrees of freedom.47) nij = 1.08 = 2(4) . a (3. j . otherwise nij = 0. K = 3 and l = 2 CF = (351)2 12 = 10266.4)] = 73  Q2 = 70  Q3 = 94  (67 + 75 + 102 + 0) =  (67 + 75 + 0 + 107) =  25 3 39 3 (67 + 0 + 102 + 107) = 6 3 58 3 Q4 = 114  a 1 3 (0 + 75 + 102 + 107) = K  Qi2 SST(adjusted) = i 1 Ma (Eq.46)  25 2  39 2  6 2  58 2  3    +   +   +     3   3  3  3     = 231. if treatment appears in block j.48) 1 K j 1  T. – 1 K j 1  nijY.2 + T.3 + 0(T. M= SSBlock = r (K  1) a  1 (3. r = 3.75 SSTotal = 10908 – 10266.1 + T. b i = 1.92 Adjusted treatment totals: Q1 = T1. – 1 3 1 3 1 3 1 3 [T. ….76 Applied Design of Experiments and Taguchi Methods Qi = Yi.25 SSBlock = 1 3 (672 + 752 + 1022 + 1072) – CF = 388. 3.75 = 638. a = 4. b = 4. 2. The sum of adjusted totals will be zero and has a – 1 degrees of freedom. ˆ = KQi T i Ma Standard error ( Se ) = K ( MSe ) (3. Selection of Latin square designs is discussed in Fisher and Yates (1953). one is the batch of material and the second one is the operators. TABLE 3.9 = 5. In Latin square design two sources of variability is eliminated through blocking in two directions.92 18. In this design. Any multiple comparison method may be used to compare all the pairs of adjusted treatment effects. each letter appears only once in each row and only once in each column.18.. In general a Latin square of order P is a P ´ P square of P Latin letters such that each Latin letter appears only once in a row and only once in each column.05. we tried to control/eliminate one source of variability due to a nuisance factor. Thus.51) 3.41 < F0. B. In this design the treatments are denoted by the Latin letters A. …. Thus. we conclude that the compounds used has a significant effect on the ignition time. the blocking principle is used to block the batch of material as well as the operators. A 5 ´ 5 Latin square design is shown in Figure 3. C. At present they are experimenting with four different formulations. This design can best be explained by an example.3.10 — Sum of squares 231.65 F0 21. there are two sources of variation. 
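The adjusted treatment totals and the adjusted treatment sum of squares of the balanced incomplete block design can be verified numerically. The following sketch is an added note using NumPy; the treatment totals, block totals and the incidence pattern are read directly from Table 3.17 (k = 3 treatments per block, lambda = 2).

    import numpy as np

    # BIBD totals from Table 3.17: 4 compounds (treatments), 4 detonators (blocks)
    trt_totals = np.array([73, 70, 94, 114])
    blk_totals = np.array([67, 75, 102, 107])
    # n[i, j] = 1 if treatment i appears in block j, 0 otherwise
    n = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 1, 1, 1]])
    a, k, lam = 4, 3, 2

    q = trt_totals - (n @ blk_totals) / k          # adjusted treatment totals Q_i
    ss_t_adj = k * np.sum(q ** 2) / (lam * a)      # adjusted treatment sum of squares
    print(np.round(q, 2), round(ss_t_adj, 2))      # the Q_i sum to zero; SS is about 231.08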
These formulations are prepared by different operators who differ in their skill and experience level.25 638.Single-factor Experiments 77 The computations are summarized in Table 3.03 129. The levels of the two blocking factors are assigned randomly to the rows and columns and the treatments of the experimental factor are randomly assigned to the Latin letters.7. This imposes restriction on randomization in both the directions (column wise and row wise).50) Ma (3. etc.64 3.08 388. . Hence.4 Degrees of freedom Mean squares 3 3 5 11 77. Each formulation is prepared from a batch of raw material that is just enough for testing four formulations.5 LATIN SQUARE DESIGN In Randomized complete block design.18 Source of variation Treatments (adjusted for blocks) Blocks Error Total ANOVA for the Illustration 3. the design consists of testing each formulation only once with each batch of material and each formulation to be prepared only once by each operator. A space research centre is trying to develop solid propellant for use in their rockets. and hence it is called Latin square design.25 Since F0. 78 Applied Design of Experiments and Taguchi Methods A C D E B FIGURE 3.7 B D E A C C E A B D D A B C E E B C D A 5 ´ 5 Latin square design. 3.5.1 The Statistical Model i = 1, 2, ..., P Yijk = mÿ + Ai + Tk + Bj + eijk   j = 1, 2, ..., P k = 1, 2, ..., P  ÿÿm (3.52) Yijk = observation in the ith row and j th column for the kth treatment = overall mean Ai = row effect Tk = effect of kth treatment Bj = column effect eijk = NID(0, s 2) random error. The total sum of squares is partitioned as SSTotal = SSrow + SScol + SST + SSe df: P2 – 1 = (P – 1) + (P – 1) + (P – 1) + (P – 2) (P – 1) (3.53) (3.54) The appropriate statistic for testing no difference in treatment means is F0 = MST MSe (3.55) The standard format for ANOVA shall be used. ILLUSTRATION 3.5 Latin Square Design Suppose for the solid propellant experiment described in Section 3.5, the thrust force developed (coded data) from each formulation (A, B, C and D) is as follows (Table 3.19) The treatment totals are obtained by adding all As, all Bs, etc. A = 75, B = 43, C = 58, D = 22 Single-factor Experiments 79 TABLE 3.19 Batch of raw material 1 2 3 4 Column total Data for the Illustration 3.5 (Latin Square Design) Operators 1 A = 14 D= 4 C = 18 B = 14 50 2 D = 6 C = 14 B = 11 A = 20 51 3 C = 10 B = 10 A = 22 D = 5 47 4 B= 8 A = 19 D = 7 C = 16 50 38 47 58 55 T… = 198 Row total The correction factor (CF) = (198)2 16 = 2450.25 SST = (75)2 + (43)2 + (58)2 + (22)2 4 – CF = 2830.5 – 2450.25 = 380.25 using row totals, SSrow = 2510.5 – CF = 60.25 from column totals, SScol = 2452.5 – CF = 2.25 SSTotal = 2924.0 – CF = 473.75 SSe = 473.75 – 442.75 = 31.0 These computations are summarized in Table 3.20. TABLE 3.20 Source of variation Formulation (Treatment) Batch of material Operators Error Total F0.05,3,6 = 4.46 Since F0 > F0.05,3,6 = 4.46, the type of formulation has significant effect on the thrust force developed. ANOVA for Latin Square Design (Illustration 3.5) Sum of squares 380.25 60.25 2.25 31.00 473.75 Degrees of freedom Mean squares 3 3 3 6 15 126.75 20.08 0.75 5.17 F0 24.52 — — — 80 Applied Design of Experiments and Taguchi Methods 3.6 GRAECO-LATIN SQUARE DESIGN In Latin square design, we try to control variability from two nuisance variables by blocking through rows and columns. In any single-factor experiment if three nuisance variables are present, we use Graeco-Latin Square Design. 
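The Latin square decomposition for the solid-propellant data can be checked with a few lines of code. The sketch below is an added note using NumPy; the response matrix and the letter assignments are taken from Table 3.19, and the printed sums of squares should agree with Table 3.20.

    import numpy as np

    # Thrust data of Table 3.19: rows = batches of raw material, columns = operators
    y = np.array([[14,  6, 10,  8],
                  [ 4, 14, 10, 19],
                  [18, 11, 22,  7],
                  [14, 20,  5, 16]])
    trt = np.array([['A', 'D', 'C', 'B'],
                    ['D', 'C', 'B', 'A'],
                    ['C', 'B', 'A', 'D'],
                    ['B', 'A', 'D', 'C']])
    p = 4
    cf = y.sum() ** 2 / y.size
    ss_total = (y ** 2).sum() - cf
    ss_row = (y.sum(axis=1) ** 2).sum() / p - cf                   # batches
    ss_col = (y.sum(axis=0) ** 2).sum() / p - cf                   # operators
    ss_trt = sum(y[trt == L].sum() ** 2 for L in 'ABCD') / p - cf  # formulations
    ss_e = ss_total - ss_row - ss_col - ss_trt
    print(ss_trt, ss_row, ss_col, ss_e)   # about 380.25, 60.25, 2.25 and 31.0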
In this design, in addition to rows and columns, we use Greek letters (a, b, g, d, etc.) to represent the third nuisance variable. By superimposing a P ´ P Latin square on a second P ´ P Latin square with the Greek letters such that each Greek letter appears once and only once with each Latin letter, we obtain a Graeco-Latin square design. This design allows investigating four factors (rows, columns, and Latin and Greek letters) each at P-levels in only P2 runs. Graeco-Latin squares exist for all P ³ 3, except P = 6. An example of a 4 ´ 4 Graeco-Latin square design is given in Table 3.21. TABLE 3.21 Rows 1 1 2 3 4 Aa Bd Cb D g A 4 ´ 4 Graeco-Latin square design Columns 2 B b Ag Da C d 3 C g D b Ad Ba 4 D d Ca Bg A b 3.6.1 The Statistical model i = 1, 2, ..., P   j = 1, 2, ..., P  k = 1, 2, ..., P  l = 1, 2, ..., P Yijkl = mÿ + Ai + Bj + Ck + Dl + e ijkl (3.56) where, = overall effect Ai = effect of ith row Bj = effect of jth column Ck = effect of Latin letter treatment k Dl = effect of Greek letter treatment l eijkl = NID(0, s 2) random error m The analysis of variance is similar to Latin square design. The sum of squares due to the Greek letter factor is computed using Greek letter totals. Single-factor Experiments 81 PROBLEMS 3.1 The effect of temperature on the seal strength of a certain packaging material is being investigated. The temperature is varied at five different fixed levels and observations are given in Table 3.22. TABLE 3.22 Temperature (°C) 105 110 115 120 125 5.0 6.9 10.0 12.2 9.4 Data for Problem 3.1 Seal strength (N/mm2) 5.5 7.0 10.1 12.0 9.3 4.5 7.1 10.5 12.1 9.6 4.9 7.2 10.4 12.1 9.5 (a) Test the hypothesis that temperature affects seal strength. Use a = 0.05. (b) Suggest the best temperature to be used for maximizing the seal strength (use Duncan’s multiple range test). (c) Construct a normal probability plot of residuals and comment. (d) Construct a 95% confidence interval for the mean seal strength corresponding to the temperature 120°C. 3.2 Re-do part (b) of Problem 3.1 using Newman–Keuls test. What conclusions will you draw? 3.3 A study has been conducted to test the effect of type of fixture (used for mounting an automotive break cylinder) on the measurements obtained during the cylinder testing. Five different types of fixtures were used. The data obtained are given in Table 3.23. The data are the stroke length in mm. Five observations were obtained with each fixture. Test the hypothesis that the type of fixture affects the measurements. Use a = 0.05. TABLE 3.23 Data for Problem 3.3 Type of fixture 1 1.24 1.24 1.09 1.18 1.25 2 1.30 1.24 1.31 1.13 1.20 3 1.21 1.21 1.33 1.10 1.22 4 1.18 1.12 1.25 1.31 1.07 5 1.12 1.18 1.22 1.13 1.14 3.4 An experiment was conducted to determine the effect of type of fertilizer on the growth of a certain plant. Three types of fertilizers were tried. The data for growth of the plants (cm) measured after 3 months from the date of seedling is given in Table 3.24. 82 Applied Design of Experiments and Taguchi Methods TABLE 3.24 Data for Problem 3.4 Type of fertilizer 1 12.4 13.1 11.1 12.2 11.3 12.1 2 16.1 16.4 17.2 16.6 17.1 16.8 3 31.5 30.4 31.6 32.1 31.8 30.9 (a) Test whether the type of fertilizer has an effect on the growth of the plants. Use a = 0.05. (b) Find the p-value for the F-statistic in part (a). (c) Use Tukey’s test to compare pairs of treatment means. Use a = 0.01. 3.5 An experiment was run to investigate the influence of DC bias voltage on the amount of silicon dioxide etched from a wafer in a plasma etch process. 
Three different levels of DC bias were studied and four replicates were run in a random order resulting the data (Table 3.25). TABLE 3.25 DC bias (volts) 398 485 572 283.5 329.0 474.0 Data for Problem 3.5 Amount etched 236.0 330.0 477.5 231.5 336.0 470.0 0.05. 228.0 384.5 474.5 Analyse the data and draw conclusions. Use a= 3.6 A study was conducted to assess the effect of type of brand on the life of shoes. Four brands of shoes were studied and the following data were obtained (Table 3.26). TABLE 3.26 Data for Problem 3.6 Brand 1 12 14 11 13 10 (a) (b) (c) (d) Shoe life in months Brand 2 Brand 3 14 17 13 19 14 20 21 26 24 22 Brand 4 18 16 21 20 18 Are the lives of these brands of shoes different? Use a = 0.05. Analyse the residuals from this study. Construct a 95% confidence interval on the mean life of Brand 3. Which brand of shoes would you select for use? Justify your decision. Single-factor Experiments 83 3.7 Four different fuels are being evaluated based on the emission rate. For this purpose four IC engines have been used in the study. The experimenter has used a completely randomized block design and obtained data are given in Table 3.27. Analyse the data and draw appropriate conclusions. Use a = 0.05. TABLE 3.27 Type of fuel 1 F1 F2 F3 F4 0.25 0.45 0.50 0.28 Data for Problem 3.7 Engines (Block) 2 0.21 0.36 0.76 0.14 3 0.15 0.81 0.54 0.11 4 0.52 0.66 0.61 0.30 3.8 Four different printing processes are being compared to study the density that can be reproduced. Density readings are taken at different dot percentages. As the dot percentage is a source of variability, a completely randomized block design has been used and the data obtained are given in Table 3.28. Analyse the data and draw the conclusions. Use a = 0.05. TABLE 3.28 Type of process 1 Offset Inkjet Dyesub Thermalwax 0.90 1.31 1.49 1.07 Data for Problem 3.8 Dot percentages (Block) 2 0.91 1.32 1.54 1.19 3 0.91 1.33 1.67 1.38 4 0.92 1.34 1.69 1.39 3.9 A software engineer wants to determine whether four different types of Relational Algebraicjoint operation produce different execution time when tested on 5 different queries. The data given in Table 3.29 is the execution time in seconds. Analyse the data and give your conclusions. Use a = 0.05. TABLE 3.29 Type 1 1 2 3 4 1.3 2.2 1.8 3.9 Data for Problem 3.9 Queries (Block) 2 3 4 1.6 2.4 1.7 4.4 0.5 0.4 0.6 2.0 1.2 2.0 1.5 4.1 5 1.1 1.8 1.3 3.4 84 Applied Design of Experiments and Taguchi Methods 3.10 An oil company wants to test the effect of four different blends of gasoline (A, B , C, D) on fuel efficiency. The company has used four cars for testing the four types of fuel. To control the variability due to the cars and the drivers, Latin square design has been used and the data collected are given in Table 3.30. Analyse the data from the experiment and draw conclusions. Use a = 0.05. TABLE 3.30 Driver I 1 2 3 4 D B C A = = = = 15.5 16.3 10.8 14.7 B C A D II = = = = 33.9 26.6 31.1 34.0 C A D B Data for Problem 3.10 Cars III = = = = 13.2 19.4 17.1 19.7 A D B C IV = = = = 29.1 22.8 30.3 21.6 3.11 An Industrial engineer is trying to assess the effect of four different types of fixtures (A, B, C, D) on the assembly time of a product. Four operators were selected for the study and each one of them assembled one product on each fixture. To control operator variability and variability due to product, Latin square design was used. The assembly times in minutes are given in Table 3.31. Analyse the data from the experiment and draw conclusions. Use a = 0.05. 
TABLE 3.31 Operator I 1 2 3 4 A B C D = = = = 2.2 1.8 3.2 4.4 B A D C Data for Problem 3.11 Product II = = = = 1.6 3.4 4.2 2.7 C D A B III = = = = 3.6 4.4 2.6 1.5 D C B A IV = = = = 4.2 2.8 1.8 3.9 3.12 Suppose in Problem 3.11 an additional source of variation due to the work place layout/ arrangement of parts used by each operator is introduced. Data has been collected using the following Graeco-Latin square design (Table 3.32). Analyse the data from the experiment and draw conclusions. Use a = 0.05. TABLE 3.32 Operator I 1 2 3 4 = bD = gC = dB = Data for Problem 3.12 Product II 2.5 5.6 4.7 3.2 III IV aA bC aB dA gD = = = = 4.0 3.8 2.7 5.1 g B = 3.4 d C = 4.5 a D = 5.9 b A = 2.2 d D = 5.3 g A = 2.1 b B = 3.8 a C = 4.4 Further we want to investigate the temperature at two levels (70°C and 90°C) and pressure at two levels (200 MPa and 250 MPa). In factorial experiments. The main purpose of these experiments is to compare the treatments in all possible pairs in order to select the best treatment or its alternatives.2 TWO-FACTOR EXPERIMENTS Suppose we want to study the effect of temperature and pressure on the reaction time of a chemical process.C H A P T E R 4 Multi-factor Factorial Experiments 4.1 INTRODUCTION In a single-factor experiment.1 Pressure (Mpa) Two-factor factorial design Temperature (°C) 60 200 250 1 3 90 2 4 Note that there are four treatment combinations (1. The two-factor factorial design will be represented as given in Table 4. only one factor is studied. every level of one factor is combined with every level of other factors. That is. we are interested in testing the main effects as well as interaction effects. Suppose in the above two-factor experiment. the reaction time at the four treatment combinations can be graphically represented as shown in 85 .1. 4. If the number of factors is only two. TABLE 4. When all the possible treatments are studied. we call it a full factorial experiment. combination of two or more levels of more than one factor is the treatment. 2. In factorial experiments. it will be a two-factor factorial experiment. When the number of factors involved in the experiment is more than one. we call it a factorial experiment. 3 and 4) in the two-factor design as given in Table 4. Let us study these two aspects with an example.1. And the levels of the factor are the treatments. 1(b). Usually in a two-level factorial design. Interaction effect is computed as the average difference between the factor effects explained as follows: From Figure 4. 80 100 250 (High) Pressure 250 (High) Pressure 80 30 200 (Low) 110 130 200 (Low) 110 130 60 (Low) Temperature (a) 90 (High) 60 (Low) Temperature (b) 90 (High) FIGURE 4.1(a) and 4. at the low level of the factor. Hence. Main effect of a factor: The effect of a factor is defined as the change in response due to a change in the level of the factor. the response increases at the two levels of the other factor. pressure.1 Representation of response data for a two-factor experiment.86 Applied Design of Experiments and Taguchi Methods Figures 4.1(b). the behaviour of response is different at two levels and we say that there is an interaction effect. Average effect of pressure = 1/2{(100 – 130) + (80 – 100)} = –30 Interaction effect: The combined effect due to both the factors is called the interaction effect. The effect due to temperature in this experiment (Figure 4. 
Temperature effect at the low level of pressure = 130 – 110 = 20 Temperature effect at the high level of pressure = 30 – 80 = –50 Interaction effect = 1/2{(–50) – (20)} = –35 . the effect due to temperature and the effect due to pressure are the main effects. it is observed that. the response increases and at the high level the response decreases. For example in the above experiment. when temperature changes from 60°C to 90°C (Figure 4. From Figure 4. This is called the main effect of a factor because it pertains to the basic factor considered in the experiment.1(a)) is the average of the effect at the high level pressure and at the low level of pressure. from Figure 4. That is. we can find the effect of pressure. when temperature changes from 60°C to 90°C.1(a)). pressure. we represent the levels of the factor as low (–) and high (+). it is observed that the behaviour of response at the two levels of the other factor is same.1(b).2(a). That is. Thus. Effect of temperature at the high level of pressure = (100 – 80) = 20 Effect of temperature at the low level of pressure = (130 – 110) = 20 Average effect of temperature = 1/2(20 + 20) = 20 Similarly. On the other hand. we say that there is no interaction effect. 2a and 4.. the presence or absence of interaction effects in factorial experiments can be ascertained through testing using ANOVA.2 Plot of response data for the two-factor experiment shown in Figure 4. . we can compute interaction effect and it will be same.2. Pressure effect at the low level of temperature = 80 – 110 = –30 Pressure effect at the high level of temperature = 30 – 130 = –100 Interaction effect = 1/2{(–100) – (–30)} = –35 The presence of interaction effect is illustrated graphically in Figure 4.2.2a and 4. b k = 1.1b respectively..2b. the two response lines are not parallel indicating the presence of interaction effect. Whereas in Figure 4.. However.1) .2.2b shown below are the plots of response data from Figures 4.2...2b. The effects model is i = 1.2a are approximately parallel indicating that there is no interaction effect between temperature and pressure. n  m = overall effect Ai = effect of ith level of factor A (4. 4. ..Multi-factor Factorial Experiments 87 With respect to the factor (pressure) also. ..1a and 4. Figures 4..1.1 The Statistical Model for a Two-factor Experiment Let the two factors be denoted by A and B and both are fixed.. 140 120 140 Response 120 Pressure (High level) 100 80 60 40 Pressure (Low level) Response 100 80 Pr es su re (H igh su Pres l) leve ow re (L lev el) 60 Temperature 90 60 Temperature 90 (a) Without interaction (b) With interaction FIGURE 4. Note that the two response lines in Figure 4. a Yijk = m + Ai + Bj + (AB)ij + eijk   j = 1. n i 1 j 1  CF  SS A  SSB (4. we shall discuss how these effects are tested using ANOVA.2) (4.7) (4.. j H1 : at least one (AB)ij ¹ 0 Now.10) j 1 SSAB =   Tij2.. we are interested in testing the main effects of A and B and also AB interaction effect.3) (4.. The ANOVA equation for this model is: SSTotal = SSA + SSB + SSAB + SSe and the degrees of freedom is: abn – 1 = (a – 1) + (b – 1) + ( a – 1) (b – 1) + ab (n – 1) Computation of sum of squares: Correction factor (CF) = where.9) SSB =  a  CF (4. = Grand total and N = abn 2 SSTotal =    Yijk  CF i  1 j 1 k  1 a b n 2 T.4) (4... = Aa = 0 H1 : at least one Ai ¹ 0 Effect of B: H0 : B1 = B2 = . 
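The main-effect and interaction calculations for a 2 x 2 layout are conveniently expressed in code. The short sketch below is an added note; it uses the four cell responses of Figure 4.1(b). The text quotes the conditional effects and the interaction (-35); the averaged main-effect values printed here simply follow from applying the same averaging definition to these cells.

    # Cell responses from Figure 4.1(b): temperature low/high, pressure low/high
    y = {('low T', 'low P'): 110, ('high T', 'low P'): 130,
         ('low T', 'high P'): 80, ('high T', 'high P'): 30}

    # Temperature effect at each level of pressure
    temp_at_lowP  = y[('high T', 'low P')]  - y[('low T', 'low P')]    # 130 - 110 = 20
    temp_at_highP = y[('high T', 'high P')] - y[('low T', 'high P')]   # 30 - 80 = -50

    # Pressure effect at each level of temperature
    press_at_lowT  = y[('low T', 'high P')]  - y[('low T', 'low P')]   # 80 - 110 = -30
    press_at_highT = y[('high T', 'high P')] - y[('high T', 'low P')]  # 30 - 130 = -100

    # Interaction: half the difference between the two conditional effects
    interaction = 0.5 * (temp_at_highP - temp_at_lowP)                 # -35
    assert interaction == 0.5 * (press_at_highT - press_at_lowT)       # same value either way

    print(0.5 * (temp_at_lowP + temp_at_highP),    # average main effect of temperature
          0.5 * (press_at_lowT + press_at_highT),  # average main effect of pressure
          interaction)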
= Bb = 0 H1 : at least one Bj ¹ 0 Interaction effect: H0 : (AB)ij = 0 for all i.88 Applied Design of Experiments and Taguchi Methods Bj = effect of jth level of factor B ABij = effect of interaction between A and B eijk = random error component In this model.. T..8) SSA =  b a Ti2 . The hypotheses for testing are as follows: Effect of A: H0 : A1 = A2 = ..12) SSe = SSTotal – (SSA + SSB + SSAB) . (4. 2 j. an b i 1  CF (4..5) (4.11) (4.6) N (4. bn T. .. .14) i = 1.2 gives the summary of ANOVA computations for a two-factor factorial fixed effect model. The model for the two-factor factorial experiment is: Yijk = m + Ai + Bj + ABij + e (ij) k (4. ˆ =Y  Y B j ... TABLE 4.. ˆ =N ˆ +B ˆ + ( AB ˆ ) ˆ +A Y ijk i j ij (4. 2.  Y. + Y.  Yi .j. The various estimates are discussed as follows: ˆ = Y.Multi-factor Factorial Experiments 89 Table 4. (on substitution) Estimation of residuals (Rijk): the predicted value.2 Source of variation A The ANOVA for a two-factor factorial experiment Degrees of freedom a– 1 Mean squares SS A a 1 SSB b 1 SS AB (a  1) (b  1) Sum of squares SSA F0 MS A MSe MSB MSe MS AB MSe B SSB b– 1 AB SSAB (a – 1) (b – 1) Error Total SS e SSTotal ab(n – 1) abn – 1 SSe ab (n  1) 4..2 Estimation of Model Parameters The parameters of the model are estimated as follows. j .19) = Yij .. N (4...2. ˆ ) =Y  Y  A ˆ  B ˆ ( AB ij ij .17) (4. 2.13) The overall effect m is estimated by the grand mean.... . The residual is the difference between the observed value and ...15) (4.16) ˆ =Y  Y A i i... b (4... The deviation of the factor mean from the grand mean is the factor effect. . i j = Yij .18) (4. . a j = 1.. 186 205 157 T… = 548 Data analysis: First the row totals (Ti.j. TABLE 4. Data for Illustration 4..90 Applied Design of Experiments and Taguchi Methods ˆ Rijk = Yijk  Y ijk (4. Analysis of residuals: model.1 100 23 25 35 36 28 27 48 71 55 174 Temperature (°C) 120 31 32 34 35 27 25 63 69 52 184 140 36 39 31 34 26 24 75 65 50 190 Ti.3 Pressure (MPa) 100 110 120 T. The two factors are investigated at three levels each. indicate that the errors are normally distributed.1 Two-factor Factorial Experiment A chemical engineer has conducted an experiment to study the effect of temperature and pressure on the reaction time of a chemical process.56 . column totals (T.j. ˆ ). 4. if the model is correct and assumptions 3..21) = Yijk  Yij .).. The following graphical plots are obtained to check the adequacy of the 1. N = (548) 2 18 = 16683. Computation of sum of squares: Grand total (T…) = 548 Total number of observations (N = abn) = 18 Correction factor (CF) = 2 T. indicate that the errors are independent. 2. Plot of the residuals (Rijk s) on a normal probability paper. if the plot resembles a straight line.. Plot of residuals versus the factors (Ai and Bj).3. Plot of residuals versus order of experimentation. Plot of residuals versus fitted values (Rijk vs Y ijk are valid. if the residuals are randomly scattered and no pattern is observed.3) are obtained from two replications. the residuals should be structure less. these plots indicate equality of variance of the respective factors. ILLUSTRATION 4. The following data (Table 4.) and the cell totals (the underlined numbers) are obtained as given in Table 4.20) (4. 77 Sum of squares due to the factor temperature is computed using its level totals [Eq..28 Temperature Pressure Temperature ´ Pressure Interaction Error Total .56 = 410.00 – 16900. + (52)2 + (50)2 2 – 16683. (4.100 = 176. (4.8). 
SSPressure =  a Ti2 .1 (Two-factor experiment) Sum of squares 21.90 The ANOVA for the two-factor experiment (Illustration 4..00 410. bn i 1  CF = (186) 2 + (205)2 + (157)2 6 – 16683.11)}. n i 1 j 1  CF  SSPressure  SSTemperature = (48)2 + (63)2 + .4.90 17. 2  CF SSTotal =    Yijk i 1 j 1 k  1 a b n = {(23)2 + (25)2 + (31)2 + .77 Sum of squares due to the interaction between pressure and temperature is computed using the combined totals/cell totals {Eq.56 = 194. SSTemperature =  b T.89 97. 2 j.1) is given in Table 4.77 194.39 44. + (26)2 + (242)} – CF = 17094...4 Source of variation ANOVA for Illustration 4.73 51..90 F0 5. SSPressure ´ Temperature =   a b Tij2.77 = 17077.44 Degrees of freedom 2 2 4 9 17 Mean squares 10.10)].77 – 21.56 – 194.9)].00 – 16683.77 176.26 23.44 Sum of squares due to the factor pressure is computed using its level totals [Eq. (4.56 = 21. (4.23 1.Multi-factor Factorial Experiments 91 SSTotal is computed from the individual observations using Eq. TABLE 4. an j 1  CF = (174) 2 + (184) 2 + (190) 2 6 – 16683. 05.3 THE THREE-FACTOR FACTORIAL EXPERIMENT The concept of two-factor factorial design can be extended to any number of factors.0 –1. We leave this as an exercise to the reader. 1.5 1. If all the factors are fixed. Suppose we have factor A with a levels.4. Plot of residuals vs temperature 4. In order to obtain experimental error required to test all the main effects and interaction effects. arranged in a factorial experiment. the interaction is also significant. Note that there are 36 possible pairs of means of the 9 cells. In a fixed effects model (all factors are fixed).5 –0. the F-statistic is computed by dividing the mean square of all the effects by the error mean square. where n is the number of replications. factor B with b levels and factor C with c levels and so on.5 –0.63. the following graphical plots should be examined for the model adequacy as already discussed earlier.1. TABLE 4. the cell means have to be compared. (4. the combined treatment means (cell means) are to be compared. we can formulate the appropriate hypotheses and test all the effects.5 0.2. 4. the main effects temperature and pressure and also their interaction effect is significant. The number of degrees of freedom for . Residuals: The residuals are computed using Eq. Hence.5.05.9 = 3. Plot of residuals on a normal probability paper or its standardized values on an ordinary graph sheet ˆ ) 2.0 0.5 gives the residuals for Illustration 4.0 –1. On the other hand if the interaction is significant (irrespective of the status of row and column factor).5 1.5 1. we must have a minimum of two replications. Table 4. Plot of residuals vs pressure Comparison of means: When the ANOVA shows either row factor or column factor as significant. Plot of residuals vs fitted values ( Y ijk 3.1 Temperature (°C) 100 –1.5 0.5 0.5 –0. we compare the means of row or column factor to identify significant differences.1.9 = 4.5 1.26 and F0.21). For comparison any one pair wise comparison method can be used.92 Applied Design of Experiments and Taguchi Methods Since F0. In Illustration 4.5 –1.0 140 –1. This design will have abc …n total number of observations.5 120 –0.0 1.0 Using residuals in Table 4.5 Pressure (MPa) 100 110 120 Residuals for Illustration 4. .. 2. c l = 1.27) Using AB.. B and C at levels a.k ..23) 2 SSTotal =     Yijkl  CF i 1 j 1 k  1 l  1 (4. the two-factor interaction sum of squares is computed. . n  (4... AC and BC combined cell totals. ..25) SSB = j 1  T. 
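For larger factorials these computations are normally delegated to software. The following sketch is an added note; the pandas and statsmodels packages are assumed, neither of which is used in the book. It fits the two-factor model to the Table 4.3 data arranged in "long" format and prints an ANOVA table that should agree with Table 4.4. The same formula extends directly to three factors, e.g. 'y ~ C(A) * C(B) * C(C)'.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Reaction-time data of Table 4.3: pressure (MPa) x temperature (deg C),
    # two replicates per cell, one row per observation
    cells = {
        (100, 100): (23, 25), (100, 120): (31, 32), (100, 140): (36, 39),
        (110, 100): (35, 36), (110, 120): (34, 35), (110, 140): (31, 34),
        (120, 100): (28, 27), (120, 120): (27, 25), (120, 140): (26, 24),
    }
    rows = [{'pressure': p, 'temperature': t, 'y': y}
            for (p, t), obs in cells.items() for y in obs]
    df = pd.DataFrame(rows)

    model = ols('y ~ C(pressure) * C(temperature)', data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    # Sums of squares should agree with Table 4.4: roughly 194.8 (pressure),
    # 21.8 (temperature), 176.9 (interaction) and 17.0 (error).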
acn 2  T..26) SSC = k 1 abn  CF (4. 2. .Multi-factor Factorial Experiments 93 any main effect is the number of levels of the factor minus one...28) a b ... b   k = 1. . 4. 2 j. 2. SSAB = i 1 j 1   Tij2. a SSA = i 1 bcn  CF (4.1 The Statistical Model for a Three-factor Experiment Consider a three-factor factorial experiment with factors A. abcn (4. and the degrees of freedom for an interaction is the product of the degrees of freedom associated with the individual effects of the interaction.. 2  Ti.3. It is customary to represent the factor by a upper case and its levels by its lower case...... b and c respectively.. cn  CF  SSA  SSB (4. 2.24) The sum of squares for the main effects is computed using their respective level totals.22) The analysis of the model is discussed as follows: Computation of sum of squares: Correction factor (CF) = a b c n 2 T. Yijkl = m + Ai + Bj + (AB)ij + Ck + (AC)ik + (BC)jk + (ABC)ijk + e(ijk) l i = 1. c b  CF (4. a   j = 1. .29) SSBC = j 1 k 1 The three-factor interaction sum of squares is computed using the ABC combined cell totals.20 0.2 An industrial engineer has studied the effect of speed (B).20 120 157 277 43 55 65 77 120 Feed (C) 0.15 0.32) Computation of sum of squares: In this example.j.6 Depth of cut (A) 0.20 T. c = 2 and n = 2 . SSABC = i  1 j  1 k 1    Tijk . T.25 99 126 225 499 59 61 82 75 0.31) a b c 2 The error sum of squares is obtained by subtracting the sum of squares of all main and interaction effects from the total sum of squares. n  CF  SS A  SSB  SSC  SSAB  SSAC  SSBC (4. b = 2. SSE = SSTotal – SS of all effects ILLUSTRATION 4. 54 52 86 82 106 168 274 41 58 62 64 Surface roughness data for Illustration 4. bn   T. The surface roughness measurements (microns) from two replications are given in Table 4.30) b c a c  CF  SSA  SSC (4.94 Applied Design of Experiments and Taguchi Methods SSAC = i 1 k 1   Ti2 . feed (C) and depth of cut (A) on the surface finish of a machined component using a three-factor factorial design. All the three factors were studied at two-levels each. we have a = 2. 2 jk .2 Speed (B) 100 Feed (C) 0.6. an  CF  SSB  SSC (4. TABLE 4.k .25 98 142 240 517 1016(T…) Ti… 423 593 (4.jk. 00 = 462.. (4..k . SSA = i 1  Ti2 ..k.27).00 The sum of squares of the main effects is computed from Eqs.25 To compute the two-factor interaction sum of squares..00 = 20. + (77)2 – CF = 67304.00 = 2788. acn  CF b = (499) 2 + (517) 2 8 – CF = 64536.. 2 j.25 – 64516. abcn = (1016) 2 16 = 64516.00 = 1806. bcn  CF a = – CF 8 = 66322.00 The total sum of squares is found from the individual data using Eq. .25 The level totals for factor C(T.. (4. we have to identify the respective two factors combined totals.25 – 64516...25 (423)2 + (593)2 SSB = j 1  T.Multi-factor Factorial Experiments 95 Correction factor (CF) = 2 T.) are: Level 1 totals = 274 + 277 = 551 Level 2 totals = 225 + 240 = 465  T.24).. 2 SSTotal =     Yijkl  CF i  1 j  1 k 1 l 1 a b c n = (54)2 + (52)2 + (41)2 + . 2 c SSC = k 1 abn  CF = (551)2 + (465)2 8 – CF = 64978..25) to (4.25 – 64516.00 – 64516. an  CF  SSB  SSC b c = (274)2 + (225)2 + (277)2 + (240)2 4 – CF – SSB – SSC = 65007.25 = 49. SSAB = i 1 j 1   Tij . 
bn  CF  SSA  SSC a c = (226)2 + (197)2 + (325)2 + (268)2 4 – CF – SSA – SSC = 66833.96 Applied Design of Experiments and Taguchi Methods Totals for AB interaction sum of squares are found from A1B1 = 106 + 99 = 205 A1B2 = 168 + 126 = 294 A2B1 = 120 + 98 = 218 A2B2 = 157 + 142 = 299 Note that in each AB total there are 4 observations.25 – 20.50 – 64516.25 = 4. Sum of squares for AC interaction is found from Eq. Sum of squares for AB interaction is found from Eq.) are given in Table 4. (4.25 – 462.6.k . cn  CF  SSA  SSB a b 2 = (205)2 + (294)2 + (218)2 + (299)2 4 – CF – SSA – SSB = 66346.00 – 1806.jk.29).00 – 20..00 Totals for AC interaction sum of squares are found from A 1 C1 = A 1 C2 = A 2 C1 = A 2 C2 = 106 + 120 = 226 99 + 98 = 197 168 + 157 = 325 126+ 142 = 268 Note that in each AC total also there are 4 observations.25 – 462.50 – 64516. The sum of squares for AB is computed from Eq. 2 jk .28).25 = 9.00 . SSBC = j 1 k 1   T.00 – 1806. (4.00 The BC interaction totals (Y. SSAC = i 1 k 1   Ti2 .50 – 64516.30). (4. .495 11.199 0.00 At 5% significance level.697 — Sum of squares 1806.00 – 64516.00 – 9.25 462.00 = 327.7).00 2788.1)] is as follows: .25 20.25 – 4.00 – 2461. F0.25 – 20. Consider a two-factor factorial design with factors A and B and n replicates.00 = 110. + (142)2 2 – CF – SSA – SSB – SSC – SSAB – SSAC – SSBC = 66977. (4. The procedure followed in the analysis of three-factor factorial design can be extended to any number of factors and also factors with more than two levels.32.22 2.00 110.25 – 462.00 – 1806.6).2 Degrees of freedom 1 1 1 1 1 1 1 8 15 Mean squares 1806.05. TABLE 4.25 SSe = SSTotal – SS of all effects = 2788. the feed and depth of cut influence the surface finish.25 40..00 110. So. It is computed from Eq.00 9.31).25 462. (4.00 9.00 All these computations are summarized in the ANOVA (Table 4.25 20.00 49.19 0.25 4.4 RANDOMIZED BLOCK FACTORIAL EXPERIMENTS We have already discussed the application of blocking principle in a single-factor experiment.098 1.25 327.25 4. The statistical model for this design is [Eq.8 = 5. However with the increase in problem size manual computation would be difficult and hence a computer software can be used.875 F0 44.1.31 0. Blocking can be incorporated in a factorial design also.7 Source of variation Depth of cut (A) Speed (B) Feed (C) AB AC BC ABC Pure error Total Analysis of variance for Illustration 4.Multi-factor Factorial Experiments 97 The three-factor interaction ABC is found using the ABC cell totals (underlined numbers in Table 4. 4.00 – 49. the main effects depth of cut and feed alone are significant. a b c SSABC = i  1 j  1 k 1 n  CF  SSA  SSB  SSC  SS AB  SS AC  SSBC = (106)2 + (99)2 + (120)2 + . 2    Tijk .00 49. That is. . Similarly if an experiment with two replications could not be completed in one day. Block sum of squares is computed using the block totals (T. b  k = 1.33) where..k  CF k 2  Ti.  CF  SS A  SSB i j (a – 1) (b – 1) By subtraction 2    Yijk  CF i j k abn – 1 ..  CF i Source of variation Blocks Degrees of freedom (n – 1) ab 1 A bn (a – 1) 1 B an 2  T. day represents the block. dk is the effect of kth block. . The analysis of this model is same as that of factorial design except that the error sum of squares is partitioned into block sum of squares and error sum of squares. A typical ANOVA table for a two-factor block factorial design is given in Table 4.98 Applied Design of Experiments and Taguchi Methods Yijk = m + Ai + Bj + (AB)ij + e(ij)k i = 1.8.. 
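Hand computations of this kind are worth cross-checking with software. One possible cross-check is sketched below, assuming pandas and statsmodels are available; the coded levels and responses are randomly generated placeholders, not the Table 4.6 data.

    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # long format: one row per observation, 2 x 2 x 2 factorial with 2 replicates
    rng = np.random.default_rng(0)
    rows = [(a, b, c, rep, rng.normal(60.0, 8.0))
            for a, b, c in itertools.product([0, 1], repeat=3)
            for rep in (1, 2)]
    df = pd.DataFrame(rows, columns=["A", "B", "C", "rep", "y"])

    model = smf.ols("y ~ C(A) * C(B) * C(C)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # SS, df, F and p for every main effect and interaction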
Suppose to run this experiment a particular raw material is required that is available in small batches... .. Thus. An alternative design is to run each replicate with each batch of material which is a randomization restriction or blocking. This causes the experimental units to be non-homogeneous.... The model assumes that interaction between blocks and treatments/factors is negligible. we can run one replication on one day and the second replication on another day.. . 2.. .. 2. j . 2. n  The analysis of this design is already discussed earlier. 2. 2. . n  (4.. a   j = 1.. This type of design is called randomized block factorial design..  CF j (b – 1) 1 AB Error Total n 2   Tij .. Then for replication another batch is needed.4..1 The Statistical Model Yijk = m + Ai + Bj + (AB)ij + dk + e( ij)k i = 1. The order of experimentation is random with in the block. 2. in this.k).. 4. b  k = 1. TABLE 4.8 General ANOVA for a two-factor block factorial design Sum of squares 1 2  T. And each batch of material is just enough to run one complete replication only (abn treatments).. a   j = 1.. . .6 5.444 = 2941.3 36. = Grand total).5 22.2 7..1 15.3 Frequency (A j) 4 Load (Bk) 5 10 15 5. Randomized Block Factorial Design A study was conducted to investigate the effect of frequency of lifting (number of lifts/min) and the amount of load (kg) lifted from floor level to a knee level height (51 cm).5)2 + … + (21.8 5. Correction factor (CF) = 2 T.3 8.5 13.9 7.9 WorkersBlock (di) 2 Load (Bk) 5 10 15 1 2 3 4 Tjk.6)2 + (13..Multi-factor Factorial Experiments 99 The Correction factor in Table 4.4 26. Three loads 5.6 13.5 13. A randomized block factorial design was used with load and frequency as factors and workers as blocks.1 145.7 75.2 22.0 24.8 Percentage rise in heart rate data for Illustration 4. The percentage rise in heart rate was used as response.2 4.5 106.3 2 T.9 36.9.236 The sum of squares of main effects is computed using the respective factor level total.. 90.0 14.5 9.9.6 19.. .7 19.4 43.0 16. 10 and 15 kg and three frequencies 2.5 10. Tj.3 17.4 6 Load (Bk) 5 10 15 5.444 2 SSTotal =    Yijk  CF i 1 j 1 k  1 = (5..4 Data analysis: Let factor A = frequency.0 5.0 109. The data are given in Table 4.0 74.9 47.2 95.9 193.2)2 + (8. The procedure consists of lifting a standard container with the given load at a given frequency. factor B = load and the block = d.6 12. 5. abn 4 3 = (472)2 36 3 = 6188.2 8.3 T… = 472.4 3.2 8. 4 and 6 lifts per minute have been studied.3)2 – 6188. abn (T. The data is analyzed using ANOVA.4 4. All the relevant totals required for computing sum of squares is given in Table 4.2 7.5 5.6 92..3 Ti.8 220.0 10..4 29. TABLE 4.8 is ILLUSTRATION 4.0 48.5 27.5 21.2 16. Four workers have been selected and used in the study. 3)2 9  CF = 7027.4 + 109.3.31 454.5 – 6188. ).60 1.4 B10 = 36.8)2 4  CF  SS A  SSB = 7807.078 Table 4.. ANOVA for Illustration 4.4)2 12  CF = 558.483 The interaction between frequency and load (AB) is found using the combined totals (T.5)2 + .85 2941.6)2 + (92.24 F0.3)2 + (36.6 B15 = 47.80 20..0)2 + (220. the level totals are: B5 = 22..3 + 26.8 + 75.5 + 43. SSAB = (22.4. SSBlock = (90.62 909.88 Sum of squares 839.6)2 + (145.2)2 + (95.6 = 85..483 = 151.12 F0 — 13.100 Applied Design of Experiments and Taguchi Methods The sum of squares of A is computed from (Tj.jk.8 = 233.40.616 – 909.24 = 2.4)2 + (153.522 – 6188.444 – 558.3 (RBFD) Degrees of freedom 3 2 2 4 24 35 Mean squares — 279.1 + 74.05.21 482.616 For factor B. 
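A randomized block factorial fits the same software pattern: following Eq. (4.33), the block enters the model as an additive term only, since the block-by-treatment interaction is assumed negligible. The sketch below assumes pandas and statsmodels; the column names and responses are illustrative placeholders, not the Table 4.9 data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # illustrative long-format data: 4 workers (blocks) x 3 frequencies x 3 loads,
    # one observation per block-treatment combination, placeholder responses
    rng = np.random.default_rng(0)
    rows = [(w, f, load, rng.normal(15.0, 5.0))
            for w in range(1, 5) for f in (2, 4, 6) for load in (5, 10, 15)]
    df = pd.DataFrame(rows, columns=["worker", "frequency", "load", "rise"])

    # block enters additively; no block x treatment interaction term is fitted
    model = smf.ols("rise ~ C(worker) + C(frequency) * C(load)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

With four blocks and nine treatment combinations this leaves 24 degrees of freedom for error, matching the hand-computed ANOVA layout.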
frequency and load have got significant effect on the percentage rise in heart rate due to the lifting task. frequency ´ load interaction shows no significance.74 37.). .5 + 36.10 gives the ANOVA for Illustration 4.0 SSB = (85.6)2 + (233.10 Source of variation Workers (Blocks) Frequency (A) Load (B) Frequency ´ Load (AB) Error Total F0.05.88 22.0 = 153.0)2 12  CF = 909.9) 2 + (193. TABLE 4.2.08 558.24 = 3.48 151. However. + (109.) SSA = (106.207 The block sum of squares is found from the row totals (Ti.78 Inference: At 5% level of significance.444 = 839. Multi-factor Factorial Experiments 101 4.5 EXPERIMENTS WITH RANDOM FACTORS In all the experiments discussed so far we have considered only fixed factors (the levels of the factors are fixed or chosen specifically). The model that describes a fixed factor experiment is called a fixed effects model. Here, we consider the treatment effects Tis as fixed constants. The null hypothesis to be tested is H0: Ti = 0 for all i (4.34) And our interest here is to estimate the treatment effects. In these experiments, the conclusions drawn are applicable only to the levels considered in the experimentation. For testing the fixed effect model, note that we have used mean square error (MSe) to determine the F-statistic of all the effects in the model (main effects as well as interaction effects). 4.5.1 Random Effects Model If the levels of the factors are chosen randomly from a population, the model that describes such factors is called a random effects model. In this case the Tis are considered as random variables 2 2 and are NID (0, T T ). T T represents the variance among the Tis. The null hypothesis here is H0: 2 =0 TT (4.35) 2 In these experiments we wish to estimate T T (variance of Tis). Since the levels are chosen randomly, the conclusions drawn from experimentation can be extended to the entire population from which the levels are selected. The main difference between fixed factor experiment and the random factor experiment is that in the later case the denominator (mean square) used to compute the F-statistic in the ANOVA to test any effect has to be determined by deriving the Expected Mean Square (EMS) of the effects. Similarly, EMS has to be derived in the case of mixed effects model (some factors are fixed and some are random) to determine the appropriate F-statistic. Suppose we have a factorial model with two factors A and B. The effects under the three categories of models would be as follows: Fixed effects model A is fixed B is fixed AB is also fixed Random effects model A is random B is random AB is random Mixed effects model A is fixed B is random AB will be random 4.5.2 Determining Expected Mean Squares The experimental data is analyzed using ANOVA. This involves the determination of sum of squares and degrees of freedom of all the effects and finding appropriate test statistic for each effect. This requires the determination of EMS while we deal with random or mixed effects models. For this purpose the following procedure is followed: 1. The error term in the model e ij... m shall be written as e( ij ...) m, where the subscript m denotes the replication. That is, the subscript representing the replication is written outside the parenthesis. This indicates that (ij...) combination replicated m times. 102 Applied Design of Experiments and Taguchi Methods 2. Subscripts within the parentheses are called dead subscripts and the subscript outside the parenthesis is the live subscript. 3. 
The degrees of freedom for any term in the model is the product of the number of levels associated with each dead subscript and the number of levels minus one associated with each live subscript. For example, the degrees of freedom for (AB)ij is (a – 1)(b – 1) and the degrees of freedom for the term e(ij)k is ab(n – 1).
4. The fixed effect (fixed factor) of any factor A is represented by φA, where

    φA = Σ Ti² / (a – 1)

and the random effect (random factor) of A is represented by σ²A. If an interaction contains at least one random effect, the entire interaction is considered as random.

Deriving expected mean squares

Consider a two-factor fixed effects model with factors A and B:

    Yijk = μ + Ai + Bj + (AB)ij + e(ij)k,   i = 1, 2, ..., a;  j = 1, 2, ..., b;  k = 1, 2, ..., n

In this model μ is a constant term and all other terms are variable.

Step 1: Write down the variable terms as row headings: Ai, Bj, (AB)ij, e(ij)k.

Step 2: Write the subscripts i, j and k as column headings.

Step 3: Above each column subscript, write F if the subscript belongs to a fixed factor and R if it belongs to a random factor, and write the number of levels/observations above each subscript. Note that the replication is always random.

                a    b    n
                F    F    R
                i    j    k
    Ai
    Bj
    (AB)ij
    e(ij)k

Step 4: Fill the cells whose column subscript appears in the row term with 0 if that column subscript is fixed, with 1 if it is random, and also with 1 if the subscript is dead.

                a    b    n
                F    F    R
                i    j    k
    Ai          0
    Bj               0
    (AB)ij      0    0
    e(ij)k      1    1    1

Step 5: Fill in the remaining cells with the number of levels shown above the column headings.

                a    b    n
                F    F    R
                i    j    k
    Ai          0    b    n
    Bj          a    0    n
    (AB)ij      0    0    n
    e(ij)k      1    1    1

Step 6: Obtain the EMS as follows. Consider the row effect Ai and suppress the column corresponding to its subscript (the i column). Take the product of the visible row numbers (bn) and the corresponding effect (φA) and enter it in that row under the EMS column, which gives bnφA. This is repeated for every row in which the subscript i appears; the next rows containing i are (AB)ij and e(ij)k, and the EMS terms corresponding to these two rows are zero (0 × n) and σ²e (the variance associated with the error term) respectively. These terms are added to the first term, bnφA. Thus the EMS of Ai is bnφA + σ²e (the zero is omitted). Note that the EMS for the error term is always σ²e. This procedure is repeated for all the effects. Table 4.11 gives the EMS for the two-factor fixed effects model and the corresponding F-statistics.

TABLE 4.11  Expected mean squares for a two-factor fixed effects model

                a    b    n
                F    F    R
                i    j    k    EMS               F0
    Ai          0    b    n    bnφA + σ²e        MSA/MSe
    Bj          a    0    n    anφB + σ²e        MSB/MSe
    (AB)ij      0    0    n    nφAB + σ²e        MSAB/MSe
    e(ij)k      1    1    1    σ²e

The F-statistic is obtained by forming a ratio of two expected mean squares such that the only term in the numerator that is not in the denominator is the effect to be tested. Thus, the F-statistic for every effect in Table 4.11 is obtained by dividing the mean square of the effect by the error mean square. This will be the case in all fixed effects models. Tables 4.12, 4.13 and 4.14 give the expected mean squares for a two-factor random effects model, a two-factor mixed effects model and a three-factor random effects model respectively.
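Before turning to those tables, it helps to see what the EMS column buys in practice: the denominator of each F-ratio changes with the type of model. A short plain-Python sketch; the mean-square values are made up purely for illustration.

    # illustrative mean squares from a two-factor design with n replicates
    ms = {"A": 120.0, "B": 95.0, "AB": 30.0, "E": 12.0}

    # fixed effects model: every effect is tested against the error mean square
    F_fixed = {k: ms[k] / ms["E"] for k in ("A", "B", "AB")}

    # random effects model: the EMS show A and B must be tested against MS_AB
    F_random = {"A": ms["A"] / ms["AB"], "B": ms["B"] / ms["AB"], "AB": ms["AB"] / ms["E"]}

    # mixed model (A fixed, B random): A against MS_AB, B and AB against MS_E
    F_mixed = {"A": ms["A"] / ms["AB"], "B": ms["B"] / ms["E"], "AB": ms["AB"] / ms["E"]}

    print(F_fixed, F_random, F_mixed, sep="\n")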
TABLE 4.12 Expected mean squares for a two-factor random effects model a R i Ai Bj (AB)ij e(ij) k TABLE 4.13 1 a 1 1 b R j b 1 1 1 n R k n n n 1 EMS nb s + n s 2 A F0 2 AB 2 e s 2 2 na s 2 B + n s AB s e 2 n s AB + s e2 s e2 MSA/ MSAB MSB/ MSAB MSAB/MSe Expected mean squares for a two-factor mixed effects model a F i Ai Bj (AB)ij e(ij) k 0 a 0 1 b R j b 1 1 1 n R k n n n 1 EMS 2 nb fA + ns AB + 2 sB + s e2 2 n s AB + s e2 s e2 F0 s e2 MSA/MSAB MSB/MSe MSAB/ MSe na Multi-factor Factorial Experiments 105 TABLE 4.14 Expected mean squares for a three-factor random effects model a R i b R j b 1 b 1 1 b 1 1 c R k c c 1 c 1 1 1 1 n R l n n n n n n n 1 bcn s + cn s acn s 2 A 2 B 2 C EMS + cn s 2 AB 2 AB 2 BC 2 2 + bn sAC + ns ABC + 2 + an sBC Ai Bj Ck (AB)ij (AC)ik (BC)jk (ABC)ijk e(ijk) l 1 a a 1 a 1 1 1 2 cn s2 AB + n sABC + 2 bn s2 AC + n sABC 2 an s2 BC + n sABC 2 n sABC + abn s + an s s e2 s e2 + s e2 + s e2 + bn s2 AC s e2 2 + ns ABC + s e2 2 + n sABC + s e2 s e2 Note that when all the three factors are random, the main effects cannot be tested. 4.5.3 The Approximate F-test In factorial designs with three or more factors involving random or mixed effects model and certain complex designs, we may not be able to find exact test statistic for certain effects in the model. These may include main effects also. Under these circumstances we may adopt one of the following approaches. 1. Assume certain interactions are negligible. This may facilitate the testing of those effects for which exact statistic is not available. For example, in a three-factor random effects model (Table 4.14), if we assume interaction AC as zero, the main effects A and C can be tested. Unless some prior knowledge on these interactions is available, the assumption may lead to wrong conclusions. 2. The second approach is to test the interactions first and set the insignificant interaction (s) to zero. Then the other effects are tested assuming the insignificant interactions as zero. 3. Another approach is to pool the sum of squares of some effects to obtain more degrees of freedom to the error term that enable to test other effects. It is suggested to pool the sum of squares of those effects for which the F-statistic is not significant at a large value of a (0.25). It is also recommended to pool only when the error sum of squares has less than six degrees of freedom. 4.6 RULES FOR DERIVING DEGREES OF FREEDOM AND SUM OF SQUARES To determine the degrees of freedom and sum of squares for any term in the statical model of the experiment, we use the following rules. 4.6.1 Rule for Degrees of Freedom Number of degrees of freedom for any term in the model = (Number of levels of each dead subscript) ´ (Number of levels – 1) of each live subscript (4.36) 106 Applied Design of Experiments and Taguchi Methods Suppose we have the following model:  i = 1, 2, ..., a  j = 1, 2, ..., b    k = 1, 2, ..., c   l = 1, 2, ..., n Yijkl = m + Ai + Bj + (AB)ij + Ck(j) + (AC)ik(j) + e( ijk)l (4.37) Degrees of freedom for AC interaction = b (a – 1) (c – 1) Similarly, the error degrees of freedom = abc (n – 1) (4.38) (4.39) 4.6.2 Rule for Computing Sum of Squares Suppose we want to compute sum of squares for the term ACik(j). Define 1 = CF (Correction factor) Write the degrees of freedom (df) for AC and expand algebraically df for AC = b(a – 1) (c – 1) On expansion, we obtain abc – bc – ab + b In Eq. (4.40), if 1 is present, it will be the correction factor. Now we write the sum of squares as follows using (.) notation. 
SSAC =    i j k 2 Tijk . (4.40) n   j k T. 2 jk . an   i j Tij2.. cn + j T. 2 j .. acn (4.41) The first term in Eq. (4.40) is abc. So to write SS for this term, triple summation over ijk 2 corresponding to the levels of abc is written and its total ( Tijk . ) is written keeping dot (.) in the place of subscript l. This subscript belongs to the replication (n) and it becomes the denominator for the first term of Eq. (4.41). Similarly, all other terms are obtained. PROBLEMS 4.1 A study was conducted to investigate the effect of feed and cutting speed on the power consumption (W) of a drilling operation. The data is given in Table 4.15. TABLE 4.15 Cutting speed (m/min) 30 40 50 Data for Problem 4.1 Feed (mm/rev) 0.10 26.35 26.30 35.10 35.15 39.05 38.90 0.05 17.40 17.50 23.60 23.40 29.30 29.25 0.20 38.00 38.20 54.60 54.85 58.50 58.30 Analyse the data and draw the conclusions. Use a= 0.05. Multi-factor Factorial Experiments 107 4.2 An experiment was conducted to study the effect of type of tool and depth of cut on the power consumption. Data obtained are given in Table 4.16. TABLE 4.16 Type of tool (m/min) 1 2 Data for Problem 4.2 Depth of cut (mm/rev) 2.0 3.0 11.1 11.3 12.1 12.3 17.0 17.3 18.3 18.1 0.05. 1.0 5.0 4.8 5.2 5.4 Analyse the data and draw the conclusions. Use a= 4.3 The bonding strength of an adhesive has been studied as a function of temperature and pressure. Data obtained are given in Table 4.17. TABLE 4.17 Pressure 30 120 130 140 90 74 150 159 138 168 Data for Problem 4.3 Temperature 60 34 80 106 115 110 160 90 58 30 35 70 96 104 (a) Analyse the data and draw conclusions. Use a = 0.05. (b) Prepare appropriate residual plots and comment on the model adequacy. (c) Under what conditions the process should be operated. Why? 4.4 A study was conducted to investigate the effect of frequency and load lifted from knee to waist level. Three frequencies (number of lifts per minute) and three loads of lift (kg) have been studied. Four subjects (workers) involved in manual lifting operation were studied. The percentage of rise in heart rate measured for each subject (worker) is given in Table 4.18. 74 3.96 4.80 6. Use a= 0.59 15.98 16.77 15.86 15 15.01 Analyse the data and draw the conclusions.63 Load 10 10.5 A study was conducted to investigate the effect of type of container on the heart rate of manual materials lifting workers.15 17.28 4.89 22.26 7.38 24.52 9.49 11.23 Load 10 10.53 6 15 16.03 13.99 37.61 4.05.78 9.52 2.76 27.88 5.64 3.41 3.44 Data for Problem 4.19.87 3.63 25.92 25.06 9.88 10. Two types of containers (rectangular box and bucket) have been used in the study to lift 10 kg.19 Subject (Block) 10 1 2 3 4 76 76 76 76 92 92 84 88 Data for Problem 4.99 6. Use block factorial design.56 12.05. Five workers were selected randomly and collected data (heart rate) which is given in Table 4.26 14.47 4.5 Type of container Rectangular box Load 15 72 72 84 84 100 104 88 90 20 88 88 100 100 104 100 92 94 10 68 68 88 92 80 76 80 80 Bucket Load 15 80 72 88 88 92 88 72 72 20 68 68 76 80 80 84 80 82 Analyse the data and draw the conclusions.11 11.91 15.64 4.4 Frequency 4 2 Load 10 4. a = 0.49 24.86 19. The order of experimentation was random with in each block. 15 kg and 20 kg of load from floor to 51 cm height level.68 3.51 4.86 5 4.79 7.18 39.55 15 5.15 9.24 20.108 Applied Design of Experiments and Taguchi Methods TABLE 4.49 3.91 11.29 1. .30 3.10 20.02 5.51 40.23 21.54 9. TABLE 4.23 26.92 43.63 12.95 13. 
Note that it is a randomized 4.43 7.15 30.26 14.95 2.02 13.98 27.18 Subject (Block) 5 1 2 3 4 2.27 4.92 10.73 5 4. 21 Fixture 1 Operator O1 O2 O3 1 2 3 4 5.40 2.2 26.7 12.10 7.6 10.2 3.9 19.0 13.2 5.9 16.0 16.78 9.2 22.51 4.35 7.99 PCS 3. The colour difference obtained from each method was measured (Table 4.20).3 25.7 An industrial engineer has conducted a study to investigate the effect of type of fixture.2 21.1 11.8 3 Operator O2 O3 9.7 17.7 Type of layout 2 Operator O2 O3 10.4 4.0 4.80 3.08 3.8 27.2 11.5 8.3 12.20 Method Data for Problem 4.9 13. Analyse the data and draw conclusions.8 10.83 2.5 13.25 9.4 Data for Problem 4.6 16.30 7.6 Three different methods of gray component replacement in a printing process have been studied.65 Xerox Paper Coated 9.41 7. Use a = 0.55 7.7 15.61 3.6 14.20 4.52 9.2 19.3 7.2 7.7 24.9 25.85 9.2 32. A test colour patch in two different types of paper was printed using two different types of digital printing machines.6 7.0 16. TABLE 4.9 11.4 23.43 7.5 11.9 13. The operators were selected randomly.6 6.3 7. The assembling time data obtained is given in Table 4.0 27.05.Multi-factor Factorial Experiments 109 4.77 Uncoated 16.5 12.26 15.5 17. operator and layout of components on the assembly time of a product.0 O1 3.4 4.9 27.05 9.2 26.50 4.71 3.5 36.10 3.39 Device link Dynamic device link 4.9 .50 15. TABLE 4.1 5.05 2.6 18.7 16.50 17.2 4.9 14.4 9.6 27.7 9.9 30.7 8. Analyse the data and draw conclusions.7 9.21.7 7.4 9.9 5.4 O1 4.9 11.2 9.5 30.5 12.6 Type of printing machine HP Paper Coated Uncoated 7.87 2.85 2.01 15.81 16.2 17.4 5.41 7.40 7.46 2. C H A P T E R 5 The 2k Factorial Experiments 5. two operators. When several factors are to be studied. the high or low levels of a factor. These designs are usually used as factor screening experiments. When several factors are involved. Since there are only two levels for each factor. A factorial design with k -factors.2 THE 22 FACTORIAL DESIGN The simplest design in the 2k series is a 22 design. · The usual normality assumption holds good. Qualitative: two machines. Generally. especially at the early stages of process/product development. each at only two levels is called 2k design. Thus. Quantitative: two values of temperature. Let us study this design with an example.1 INTRODUCTION Factorial designs with several factors are common in research. With these few significant factors. pressure or time etc. the number of levels of these factors is to be limited. full factorial experiment with more levels can then be studied to determine the optimal levels for the factors. we assume that the response is approximately linear over the levels of the factors. etc. The levels of the factors may be quantitative or qualitative. the following assumptions are made in the 2k factorial designs. It provides the smallest number of treatment combinations with which k-factors can be studied in a full factorial experiment. two factors each at two levels. otherwise the number of experiments would be very large. · The factors are fixed. 5. all of them may not affect the process or product performance. · The designs are completely randomized. 110 . several factors with two levels each will minimize the number of experiments. The two levels of the factors are usually referred to as low and high. a special case of multi-factor factorial experiment. And each factor with a minimum of two levels has to be studied in order to obtain the effect of a factor. The 2k design assists the experimenter to identify the few significant factors among them. 
+) and (1.1 The 22 Factorial Experiment Consider the problem of studying the effect of temperature and pressure on the yield of a chemical process. (–. Hence. B low high. . Thus. Under (0. the effect of a factor is denoted by a capital letter and the treatment combinations are represented by lower case letters. a different method is adopted for 2k design.1. B high.1. the four treatment combinations are represented as follows: 0 0: Both factors at 1 0: A high. Thus. namely (0. Suppose the following results are obtained from this study. The response (yield) total from the two replications for the four-treatment combinations is shown below. 1) notation. This method becomes cumbersome when the number of factors increases. A refers to the effect of A B refers to the effect of B AB refers to the effect of AB interaction. Treatment combination 1 A A A A low. 2) in these 2k designs. B high: low level. B high high. we have three notations. Pressure (B) Temperature (A) Low (60°C) Low (100 MPa) High (150 MPa) 40 37 59 54 High (70°C) 43 50 37 43 This data can be analysed using the method similar to that of the two-factor factorial experiment discussed in Chapter 4.The 2k Factorial Experiments 111 ILLUSTRATION 5. Let the two factors temperature and pressure are denoted by A and B respectively. denoted by (1) denoted by a denoted by b denoted by ab These four treatment combinations can be represented by the coordinates of the vertices of the square as shown in Figure 5. the low level is represented by 0 or – or 1. 0 1: A low. B low low. For any factor. The number of replications is two. B high 40 43 59 37 Replicate 2 37 50 54 43 Grand total = Total 77 93 113 80 363 The treatment combinations with the yield data are shown graphically in Figure 5. 11: A high. 1). which is discussed below. And the high level of the factor is represented by 1 or + or 2. By convention. B low. 1 Graphical representation of 22 design. In Figure 5. a. b and ab represent the response total of all the n replicates of the treatment combinations. (1).2) Interaction effect (AB): Interaction effect AB is the average difference between the effect of A at the high level of B and the effect of A at the low level of B.112 Applied Design of Experiments and Taguchi Methods High (150) 1 b = 113 ab = 80 Pressure (B) 0 Low (100) (1) = 77 a = 93 Low (60) Temperature (A) High (70) FIGURE 5. . the average effect of a factor is defined as the change in response produced due to change in the level of that factor averaged over the other factor.1.2. 5. Main effect of A: Effect of A at the low level of factor B = [a  (1)] n (ab  b) Effect of A at the high level of factor B = n Average effect of A = 1/2n [(a – 1) + (ab – b)] = 1/2n [ab + a – b – (1)] Main effect of B: Effect of B at the low level of A = (5.1 Determining the Factor Effects In a two-level design.1) [b  (1)] n (ab  a) Effect of B at the high level of A = n Average effect of B = 1/2n [b – (1) + (ab – a)] = 1/2n [ab + b – a – (1)] (5. 1) or (5.25 Effect of B = 23/4 = 5. With respect to factor B.1. (5. We may now obtain the value of the contrasts by substituting the relevant data from Figure 5. the magnitude as well as the direction of the factor effects indicate which variables are important. In these 2k designs. Compared to main effects the interaction effect is higher and negative. 
This indicates that the behaviour of response is different at the two levels of the other factor.25 Observe that the effect of A is negative.3) AB = 1/2n [(ab – b) – {a – (1)}] = 1/2n [ ab + (1) – a – b] We can also obtain interaction effect AB as the average difference between the effect of B at the high level of A and the effect of B at the low level of A.2) and (5.The 2k Factorial Experiments 113 (5. we can compute sum of squares for any effect (SSeffect ) using the contrast. (5. since k = 2 which is same as Eqs. indicating that increasing the factor from low level (60°C) to high level (70°C) will decrease the yield. AB = 1/2n [(ab – a) – {b – (1)}] = 1/2n [ab + (1) – a – b] (5.3) are the contrasts used to estimate the effects. The effect of B is positive suggesting that increasing B from low level to high level will increase the yield.1). the contrasts used to estimate the main and interaction effects are: Contrast A (CA) = ab + a – b – (1) Contrast B (CB) = ab + b – a – (1) Contrast AB (CAB) = ab + (1) – a – b Note that these three contrasts are also orthogonal.3) Therefore.4) which is same as Eq. That is. increasing A from low level to high level results in increase of yield at low level of factor B and decrease of yield at high level of factor B. This can be confirmed by analysis of variance. Thus.75 Effect of AB = –49/4 = –12.2) or (5. Average effect of any factor = contrast n 2k 1 = contrast 2n . CA = 80 + 93 – 113 – 77 = –17 CB = 80 + 113 – 93 – 77 = 23 CAB = 80 + 77 – 93 – 113 = –49 The average effects are estimated by using the following expression. (5. (5. Observe that the quantities in the parenthesis of Eqs. . the result is opposite.3). As already discussed in Chapter 3. Effect of A = –17/2 (2) = –4. 875 F0 2.43 4. + (43)2  (363) 2 8 = 16933.125 – 66..500 The analysis of variance is summarized below (Table 5. TABLE 5.125 SSB = = 2 n ( 49)2 2 4 = 300. Computation of sum of squares SSA = (C A )2 n2 k = (17)2 2 4 (23)2 2  4 = 36. SSeffect = (contrast)2 n2k (5..000 – 16471.125 – 300. .875 Since F0.71.125 = 59.125 66.125 300.125 14. only the interaction is significant and main effects are not significant.6) where n is the number of replications.125 59.05.7) = (40) 2 + (37)2 + (43) 2 + (50)2 + .1 Degrees of freedom 1 1 1 4 7 Mean square 36.875 – 36.125 = 461.5 461.18 Sum of squares 36.125 2 SSTotal =    Yijk  CF i 1 j 1 k  1 (5.125 300.125 SSB = (CB )2 n2k (C AB )2 n2k 2 = = 66.45 20.114 Applied Design of Experiments and Taguchi Methods SSeffect = (contrast)2 n 6ci2 (5.1.875 SSe = SSTotal – SSA – SSB – SSAB = 461.1 Source of variation Temperature (A) Pressure (B) Interaction AB Error Total ANOVA for Illustration 5.4 = 7.125 66.1).5) where ci is the coefficient of the ith treatment present in the contrast In 2k design. Using this order.1. From these contrasts sum of squares are computed as explained before. To find the contrast for estimating any effect. For example. (5. the regression model is Y = b0 + b1X1 + b2X2 + b3X1X2 + e where Xi = coded variable ÿbs = regression coefficients (5. To fill up the table. the contrast coefficients used in estimating the effects above are as follows: Effect A B AB (1) –1 –1 +1 a +1 –1 –1 b –1 +1 –1 ab +1 +1 +1 Note that the contrast coefficients for AB are the product of the coefficients of A and B. signs under AB are obtained by multiplying the respective signs under A and B.3 The Regression Model The regression approach is very useful when the factors are quantitative.2 by the corresponding treatment combination and add. 
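The same arithmetic can be scripted directly from the treatment totals of Illustration 5.1. A plain-Python sketch that reproduces the contrasts, average effects and sums of squares obtained above:

    # treatment totals (1), a, b, ab from Illustration 5.1; n = 2 replicates, k = 2 factors
    t1, a, b, ab = 77, 93, 113, 80
    n, k = 2, 2

    contrast = {"A":  ab + a - b - t1,
                "B":  ab + b - a - t1,
                "AB": ab + t1 - a - b}
    effect = {name: c / (n * 2 ** (k - 1)) for name, c in contrast.items()}
    ss = {name: c ** 2 / (n * 2 ** k) for name, c in contrast.items()}

    print(contrast)   # {'A': -17, 'B': 23, 'AB': -49}
    print(effect)     # {'A': -4.25, 'B': 5.75, 'AB': -12.25}
    print(ss)         # {'A': 36.125, 'B': 66.125, 'AB': 300.125}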
we can develop a table of plus and minus signs (Table 5. For Illustration 5. simply multiply the signs in the appropriate column of Table 5. under effect A enter plus (+) sign against all treatment combinations containing a. The model can be used to predict response at the intermittent levels of the factors.2. to estimate A.2 Treatment combination (1) a b ab Algebraic signs to find contrasts Factorial effect A B AB – + – + – – + + + – – + I + + + + Note that the column headings A. Now. other contrasts can be obtained.1). Similarly under B.8) . TABLE 5. ab referred to as standard order.2) that can be used to find the contrasts for estimating any effect. a. Similarly.2 Development of Contrast Coefficients It is often convenient to write the treatment combinations in the order (1).2. the contrast is –(1) + a – b + ab which is same as Eq. The treatment combinations should be written in the standard order. B and AB are the main and interaction factorial effects. enter plus (+) against treatment combinations containing b and minus (–) sign against others.The 2k Factorial Experiments 115 5. 5. b. Against other treatment combinations enter minus (–) sign. I stands for the identity column with all the +ve signs under it. The contrast coefficients are always either +1 or –1. 10) where. X 1 = +1 and when temperature is at low level X1 = –1.16) .11) X2 = Pressure  125 25 (5. The complete regression model for predicting yield is ˆ =b + bX +bX + bXX Y 0 1 1 2 2 3 1 2 (5.14) Other coefficients are estimated as one-half of their respective effects. The relationship between the natural and coded variables for Illustration 5.116 Applied Design of Experiments and Taguchi Methods X1 = factor A (temperature) and X2 = factor B (pressure) The relationship between the natural variables (temperature and the pressure) and the coded variables is A  X1 = (AH + AL ) (5. ˆ = C 1 ˆ = C 2 Effect of A 2 Effect of B 2 (5. A = temperature and B = pressure. X2 will be +1 when pressure is at high level and –1 when pressure is at low level.13) b0 is estimated by Grand average = Grand total N (5. Similarly.15) (5. the levels of the coded variable will be ±1. AH and AL = high and low level values of factor A BH and BL = high and low level values of factor B. When the natural variables have only two levels.12) When temperature is at high level (70°C).1 is X1 = Temperature  (70 + 60)/2 (70  60)/2 Pressure  (150 + 100)/2 (150  100)/2 = = Temperature  65 5 (5.9) 2 (AH  AL ) 2 (BH + BL ) B  X2 = 2 (BH  BL ) 2 (5. 125 X1X2 Y In terms of natural variables.5 R6 = 54 – 56.5 R2 = 37 – 38. For example.0 .5 R4 = 50 – 46.5 = –2.375  6. the residuals at other treatment combinations can be obtained. (5.25 2 . b2 = 5. when both temperature and pressure are at low level.5 = –3.875 X2 – 6.375 – 2. X1 = –1 and X2 = –1.375. Equation (5.19) can be used to construct the contour plot and response surface using any statistical software package. the model is ˆ = 45. Note that only the significant effect is included in Eq. C 0 8 ÿÿ b1 = 4.5 At X1 = +1 and X2 = +1.5 Y R3 = 43 – 46.1 is ˆ = 45.125 (–1) (–1) = 38. the predictive model for Illustration 5.875 (–1) – 6.125  (Temperature  65)   (Pressure  125)  Y     5 25     (5.125 (–1) + 2.5 = 2.125 X1 + 2.19) Equation (5.75 2 .5 Y R5 = 59 – 56.The 2k Factorial Experiments 117 (5.5 = 3. ˆ = 56.17) ˆ = C 3 Effect of AB 2 ˆ = 363 = 45.0 R8 = 43 – 40 = 3. ˆ = 45.5 Y There are two observations of this treatment combination and hence the two residuals are R1 = 40 – 38.25 2 Hence. 
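Because each regression coefficient is simply one-half of the corresponding effect, with b0 equal to the grand average, the fitted model and its residuals can be reproduced in a few lines. A plain-Python sketch using the Illustration 5.1 estimates (all terms retained):

    # b0 = grand average; b1, b2, b3 = half of the A, B and AB effects respectively
    b0, b1, b2, b3 = 45.375, -2.125, 2.875, -6.125

    def y_hat(x1, x2):
        """Predicted yield at coded temperature x1 and pressure x2 (each -1 or +1)."""
        return b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2

    # observed yields at the four factorial points, two replicates each
    observed = {(-1, -1): (40, 37), (+1, -1): (43, 50),
                (-1, +1): (59, 54), (+1, +1): (37, 43)}

    for (x1, x2), ys in observed.items():
        fit = y_hat(x1, x2)
        residuals = [round(obs - fit, 2) for obs in ys]
        print((x1, x2), fit, residuals)   # fitted values: 38.5, 46.5, 56.5, 40.0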
Therefore.5 Similarly.19).5 When X1 = –1 and X2 = +1. the observations are 43 and 50 respectively ˆ = 46.5 = 1. X1 = +1 and X2 = –1.375 – 2.18) (5. ˆ = 40 Y R7 = 37 – 40 = –3.18) can be used to compute the residuals.5 = –1. and b3 =  12. 1.3 Plot of residuals vs.1.5 show the normal probability plot of residuals. 99 95 90 Per cent (%) 80 70 60 50 40 30 20 10 5 1 –8 –6 –4 –2 0 Residual 2 4 6 8 FIGURE 5.2 to 5. fitted values for Illustration 5. plot of residuals vs fitted value.1. 44 48 Fitted value 52 56 FIGURE 5.2 4 3 2 1 Residual 0 –1 –2 –3 –4 40 Normal probability plot of residuals for Illustration 5. main effects plot and interaction effect plot respectively for Illustration 5. . Figures 5.118 Applied Design of Experiments and Taguchi Methods These residuals can be analysed as discussed earlier in Chapter 3. 4 Plot of main effects for Illustration 5. 56 52 Mean 48 44 40 100 Pressure (B) 150 FIGURE 5. we have .1.19) can be used to construct the contour plot and response surface using any computer software package.The 2k Factorial Experiments Temperature (A) 49 48 47 46 45 44 43 42 60 70 100 150 Pressure (B) 119 Mean FIGURE 5.1. Since interaction is significant.1.6 shows the contour plot of the response and Figure 5. Figure 5. Equation (5.7 shows the surface plot of response for Illustration 5.5 Plot of Interaction effect for Illustration 5. 55 Response 50 45 140 40 60 65 Temperature (A) 70 100 120 Pressure (B) FIGURE 5.120 Applied Design of Experiments and Taguchi Methods FIGURE 5.7 Surface plot of response for Illustration 5.1.1.6 Contour plot of response for Illustration 5. the response surface will be a plane and the contour plot contain parallel straight lines. curved contour lines of constant response and twisted three dimensional response surface. If the interaction is not significant. . · The product of any two columns modulus 2 yields a column in the table.3) by which the contrasts can be formed for further analysis as in the case of 22 designs. If k > 4.20) Sum of squares of any factor/effect = where n is the number of replications.3.The 2k Factorial Experiments 121 5. In this case we have 23 = 8 treatment combinations. That is. (1).3 THE 23 FACTORIAL DESIGN Suppose we have three factors A.1 Development of Contrast Coefficient Table We can construct a plus–minus table (Table 5. (contrast)2 n * 2k = C2 (8 * n) (5. The analysis of 23 designs is explained in the following sections. ab.3 Treatment combination (1) a b ab c ac bc abc Algebraic signs to find contrasts for a 23 design Factorial effect AB C + – – + + – – + – – – – + + + + I + + + + + + + + A – + – + – + – + B – – + + – – + + AC + – + – – + – + BC + + – – – – + + ABC – + + – + – – + Table 5. B * C = BC and BC * B = B2 C = C All these properties indicate the orthogonality of the contrasts used to estimate the effects. shown below in the standard order. use of computer software is preferable.3 has the following properties: · Every column has got equal number of plus and minus signs except for column I. TABLE 5. · Column I is multiplied by any other column leaves that column unchanged. bc. b. For example. I is an identity element. This design is called 23 factorial design.21) . Average effect of any factor or effect = contrast (n * 2 k 1 ) = C (4 * n) (5. c. ac. a. 2 5. each at two levels. abc For analysing the data from 2k design up to k £ 4. · Sum of product of signs in any two columns is zero. usually we follow the procedure discussed in 2 designs. B and C. 20 Table 5.2 is summarized in Table 5. 
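Sign tables such as Tables 5.2 and 5.3 need not be written out by hand; they can be generated for any 2^k design. A plain-Python sketch (the function name is arbitrary); the contrast for any effect is then the dot product of its sign column with the treatment totals taken in standard order.

    from itertools import combinations, product

    def sign_table(k):
        """Plus/minus columns for every effect of a 2**k design, rows in standard order."""
        factors = [chr(ord("A") + i) for i in range(k)]
        # reverse each tuple so that factor A alternates fastest: (1), a, b, ab, c, ...
        runs = [dict(zip(factors, levels[::-1])) for levels in product([-1, 1], repeat=k)]
        table = {}
        for r in range(1, k + 1):
            for combo in combinations(factors, r):
                col = []
                for run in runs:
                    sign = 1
                    for f in combo:
                        sign *= run[f]
                    col.append(sign)
                table["".join(combo)] = col
        return table

    print(sign_table(3)["AB"])   # [1, -1, -1, 1, 1, -1, -1, 1], as in Table 5.3

    # contrast of AB for Illustration 5.1: totals (1), a, b, ab in standard order
    totals = [77, 93, 113, 80]
    col = sign_table(2)["AB"]
    print(sum(s * t for s, t in zip(col, totals)))   # -49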
The order of experiment was random. TABLE 5. Each factor was studied at two levels and obtained two replications. speed (B) and feed (C) on the surface finish of a machined part.10 Speed (rpm) (B) 1000 54 73 (1) = 127 86 66 c = 152 1200 41 51 b = 92 63 65 bc = 128 2 Speed (rpm) (B) 1000 60 53 a = 113 82 73 ac = 155 1200 43 49 ab = 92 66 65 abc = 131 0.4 Surface roughness measurements Tool type (A) 1 Feed (mm/rev) (C ) 0. The results obtained are given in Table 5. The contrasts are obtained from the plus–minus table (Table 5.4 also gives the respective treatment totals. The response total for each treatment combination for Illustration 5. the method of contrasts is followed.2 A 23 Design: Surface Roughness Experiment An experiment was conducted to study the effect of tool type (A).5 Treatment combination Response total (1) 127 Response totals for Illustration 5.2 a 113 b 92 ab 92 c 152 ac 155 bc 128 abc 131 To analyse the data using the ANOVA.3).4.122 Applied Design of Experiments and Taguchi Methods ILLUSTRATION 5.5. Evaluation of contrasts CA = –(1) + a – b + ab – c + ac – bc + abc = –127 + 113 – 92 + 92 – 152 + 155 – 128 + 131 = –8 CB = –(1) – a + b + ab – c – ac + bc + abc = –127 – 113 + 92 + 92 – 152 – 155 + 128 + 131 = –104 CC = –(1) – a – b – ab + c + ac + bc + abc = –127 – 113 – 92 – 92 + 152 + 155 + 128 + 131 = 142 CAB = +(1) – a – b + ab + c – ac – bc + abc = 127 – 113 – 92 + 92 + 152 – 155 – 128 + 131 = 14 CAC = +(1) – a + b – ab – c + ac – bc + abc = 127 – 113 + 92 – 92 – 152 + 155 – 128 + 131 = 20 CBC = +(1) + a – b – ab – c – ac + bc + abc = 127 + 113 – 92 – 92 – 152 – 155 + 128 + 131 = 8 . TABLE 5. 25 64. C = 142/8 = 17.75 = 516.75 SSe = SSTotal – (SSA – SSB – SSC – SSAB – SSAC – SSBC – SSABC) = 2509.25.54 0. SSAC = (20)2 /16 = 25.00 The computations are summarized in the ANOVA Table 5.75.00 12.49 1.00 4.50 F0 0.00 1260.25 25.25 The total sum of squares is computed as usual.93 50.00 12. + (66)2 + (65)2  (990)2 16 = 63766 – 61256.25 12.25 = 2509.75 ** Contribution = Degrees of freedom 1 1 1 1 1 1 1 8 15 Factor sum of squares Total sum of squares Mean square 4.00. SSBC = (8)2/16 = 4.00.75 Computation of sum of squares Factor sum of squares = 2 (contrast)2 (n * 2k ) = (C )2 (2 * 8) SSA = –8 /16 = 4.00 676.6 TABLE 5..25. SSc = (142)2/16 = 1260.00.16 26.00 4. SSTotal = (54)2 + (73)2 + (41)2 + . AC = 20/8 = 2.06* 10. ABC = –14/8 = –1.48 19.16 0.06* 0. SSB = (–104)2/16 = 676.6 Source of variation Type of tool (A) Speed (B) Feed (C) AB AC BC ABC Pure error Total * Insignificant Analysis of variance for Illustration 5.00 1260.25 516.56 100.25 25.19* — Contribution** (%) 0.00 2509.00 0.75.The 2k Factorial Experiments 123 CABC = –(1) + a + b – ab + c – ac – bc + abc = –127 + 113 + 92 – 92 + 152 – 155 – 128 + 131 = –14 Computation of effects Factor effect = contrast (n * 2 k 1 ) = contrast (2 * 4) A = –8/8 = –1. AB = 14/8 = 1. B = –104/8 = –13.00.19* 0.39* 0.49 20. SSABC = (–14)2/16 = 12.2 (23 Design) Sum of squares 4.00.5.25 12.75 – 1993.00..00 – 100 .00 676. BC = 8/8 = 1. SSAB = (14)2/16 = 12.21 0. Label the column that follow the kth column as an effect column (indicates the effects) The last but one column gives the effect estimate The last column gives the sum of squares Table 5.75 17. Fill up the second column of the table with the response total (sum of replications of each experiment if more than one replication is taken) Label the next k columns as 1. 2.00 12. ….00 –1. 5. Let us apply this algorithm to the 23 design illustration on surface roughness experiment (Table 5.4). 
we can suspect that some factors which might have influence on the response are not studied.7 Treatment combinations (1) a b ab c ac bc abc Yates algorithm for the surface roughness experiment (1) (2) (3) Effect Estimate of effect (3) ÷ n2k–1 — –1. Type of tool and all interactions have no effect on surface finish.2 note that the factors speed (B) and feed (C) together account for about 77% of the total variability. Procedure: Step 1: Step 2: Step 3: Step 4: Step 5: Step 6: The following steps discuss Yates algorithm: Create a table (Table 5. k equal to the number of factors studied in the experiment.00 12.25 1260. According to experts in this field.3. if the error contribution is more than 15%.75 2.00 4. And thus there will be future scope for improving the results.25 Response total 127 113 92 92 152 155 128 131 240 184 307 259 –14 0 3 3 424 566 –14 6 –56 – 48 14 0 990 –8 – 104 14 142 20 8 –14 I A B AB C AC BC ABC .75 Sum of squares (3)2 ÷ n2k — 4. TABLE 5.00 676.7 gives the implementation of Yates algorithm for Illustration 5.25 25.7) with the first column containing the treatment combinations arranged in the standard order. Also note that the error contributes to about 20%. Percentage contribution is an appropriate measure of the relative importance of each term in the model. If the error contribution is significantly less.2.00 –13. The column Contribution in the ANOVA table indicates the percentage contribution of each term in the model to the total sum of squares.2 Yates Algorithm for the 2k Design Yates (1937) has developed an algorithm for estimating the factor effects and sum of squares in a 2k design.00 1. only the speed (B) and feed (C) significantly influence the surface finish.124 Applied Design of Experiments and Taguchi Methods At 5% significance level. the scope for further improvement of the performance of the process or product would be marginal. In Illustration 5.50 1. 7: Treatment combinations: Write in the standard order. Response: Enter the corresponding treatment combination totals (if one replication.7 are same as those obtained earlier in Illustration 5. This algorithm is applicable only to 2k designs. Note that the first entry in column (3) is the grand total and all others are the corresponding contrast totals. (92 + 92 = 184). Hence. it may lead to either over estimation or under estimation. 5. for predicting only the significant effects are considered.3. So the model for predicting the surface finish is ˆ = b0 + b2X2 + b3X3 Y (5.4 STATISTICAL ANALYSIS OF THE MODEL SSmodel is the summation of all the treatment sum of squares which is given as: . Column (2): Use values in column (1) and obtain values in column (2) similar to column (1) Column (3): Use values in column (2) and obtain column (3) following the same procedure as in Column (1) or Column (2).75  = 61. that value is entered) Column (1): Entries in upper half is obtained by adding the responses of adjacent pairs.The 2k Factorial Experiments 125 The following are the explanation for the entries in Table 5.22) where b0 is overall mean. All other coefficients (bs) are estimated as one half of the respective effects as explained earlier.3 The Regression Model The full (complete) regression model for predicting the surface finish is given by Y = b0 + b1X1 + b2X2 + b3X3 + b4 X1X2 + b5X1X3 + b6X2X3 + b7X1X2X3 (5. (–127 + 113 = –14). Residuals can be obtained as the difference between the observed and predicted values of surface roughness. 
Entries in lower half is obtained by changing the sign of the first entry in each of the pairs in the response and add the adjacent pairs. Note that the contrast totals.23)  13   17.2. (127 + 113 = 240). The residuals are then analysed as usual.87 +   X2 +   X3 2    2  where the coded variables X2 and X3 represent the factors B and C respectively. (–92 + 92 = 0) etc. effect estimates and the sum of squares given in Table 5. 5. etc. If all terms in the model are used (significant and non-significant) for predicting the response. 42 = F0.75 = 0.75 Revised ANOVA for Illustration 5.24) where PE is the pure error (error due to repetitions. the model is significant.7944 .75/7 516. Note that the sum of squares of the insignificant effects are pooled together as lack of fit and added to the pure sum of squares of error and shown as pooled error sum of squares.2 Degrees of freedom 1 1 13 5 8 15 Mean square 676.75 2509. TABLE 5.7.8.18 a= 5% Significant Significant Not significant Note that the effects are tested with pooled error mean square and the lack of fit is tested with pure error mean square.126 Applied Design of Experiments and Taguchi Methods SSmodel = SSA + SSB + SSC + SSAB + SSAC + SSBC + SSABC = 1993.25 573. Since F0 is greater than F0.12 11.8 = 3.82 64.8 Source of variation Speed (B) Feed (C) Pooled error Lack of fit Pure error Total Sum of squares 676.50 = 4.50 516.00/8 284.50. The hypotheses for testing the regression coefficients are H0: b1 = b2 = b3 = b4 = b5 = b6 = b7 = 0 H1: at least one b ¹ 0.50 64. Other statistics · Coefficient of determination (R2): R2 (full model) = SSmodel/SStotal (5.32 28.05.00 1260.50 57.25 44.50 F0 15.58 0.75 F0 = MSmodel MSe = SSmodel /df model SSPE /df PE (5. This pooled error sum of squares is used to test the effects.25) = 1993. The corresponding degrees of freedom are also pooled and added. also called experimental error) is used to test the hypothesis.7. The revised ANOVA with the significant effects is given in Table 5.8 (5% significance level). F0 = 1993.05.00 1260.00 2509. . 2 Radj = 1  SSPE /df PE SSTotal /df Total (5.27).025 .50 2  23 = 2. · Prediction Error Sum of Squares (PRESS): It is a measure of how well the model will predict the new data. we can obtain confidence intervals for all the coefficients. another measure R2 adj (adjusted R ) is recommended.The 2k Factorial Experiments 127 R2 is a measure of the proportion of total variability explained by the model. The prediction R2 statistic is computed using Eq.025.50) 167. computer software is used to analyse the data which will generate all the statistics. 16 in this case and P = number of model parameters including b0 which equal to 8.6145 R2 adj is a statistic that is adjusted for the size of the model. (5.29).29) where N = total number of runs in the experiment. The problem with this statistic is that it always increases as the factors are added to the model even if these 2 are insignificant factors. (5. Se( C ˆ) = Se (C where. Its value decreases if non-significant terms are added to a model.26) = (1  516/8) (2509. ˆ ) is computed from Eq.N – P S e( C ) £ b £ C + t 0. Therefore.75/15) = (1  64. The software package will also give the standard error (Se) of each regression coefficient and the confidence interval. For Illustration 5.01 N = 16 (including replications) and P = 8. Substituting these values.28) The 95% confidence interval on each regression coefficient is computed from Eq. 
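The algorithm itself takes only a few lines of code. A plain-Python sketch (the function name is arbitrary), checked against the response totals of Illustration 5.2:

    def yates(totals, n):
        """Yates algorithm for a 2**k design; totals given in standard order, n replicates."""
        m = len(totals)
        k = m.bit_length() - 1
        col = list(totals)
        for _ in range(k):
            sums = [col[i] + col[i + 1] for i in range(0, m, 2)]          # upper half
            diffs = [col[i + 1] - col[i] for i in range(0, m, 2)]         # lower half
            col = sums + diffs
        effects = [c / (n * 2 ** (k - 1)) for c in col]    # first entry is not an effect
        ss = [c ** 2 / (n * 2 ** k) for c in col]
        return col, effects, ss                            # col[0] is the grand total

    # response totals of Illustration 5.2 in standard order (1), a, b, ab, c, ac, bc, abc
    contrasts, effects, ss = yates([127, 113, 92, 92, 152, 155, 128, 131], n=2)
    print(contrasts)     # [990, -8, -104, 14, 142, 20, 8, -14]
    print(effects[1:])   # [-1.0, -13.0, 1.75, 17.75, 2.5, 1.0, -1.75]
    print(ss[1:])        # [4.0, 676.0, 12.25, 1260.25, 25.0, 4.0, 12.25]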
MSe = pure error mean square and n = number of replications ˆ) = V (C MSe n2 k (5.2 ˆ) = Se (C 64. (5. N – P Se ( C ) (5. It is computed from the prediction errors obtained by predicting the ith data point with a model that includes all observations except the ith one. The standard error of each coefficient.27) Usually.32 = 0. 2 RPred = 1  PRESS SSTotal (5. that is. ˆ – t ˆ ˆ ˆ –C 0.28). the number of factors. A model with a less value of PRESS indicates that the model is likely to be a good predictor. (5.5 THE GENERAL 2k DESIGN The analysis of 23 design can be extended to the case of a 2k factorial design (k factors. Usually we use statistical software for analyzing the model.30). forming plus–minus table is cumbersome and time consuming. For large values of k. ContrastAB … K = (a±1) (b±1) … (k±1) (5. in a 24 design. Eq. the contrast ABD would be ContrastABD = (a – 1) (b – 1) (d – 1) (c + 1) (5. Forming the contrasts: In 2k design with k £ 4. where (1) indicates that all the factors are at low level. in a 23 design. 4. . the contrast AC would be ContrastAC = (a – 1) (c – 1) (b + 1) = abc + ac + b + (1) – ab – bc – a – c As another example. we can estimate the effects and sum of squares as discussed earlier in this chapter.32) (5. Include all main effects and their interactions and perform ANOVA (replication of at least one design point is required to obtain error for testing).128 Applied Design of Experiments and Taguchi Methods 5. In general.31) = – (1) + a + b – ab – c + ac + bc – abc + d – ad – bd + abd + cd – acd – bcd + abcd Once the contrasts are formed. we can determine the contrast for an effect AB … k by expanding the right-hand side of Eq. The sign in each set of parameters is negative if the factor is included in the effect and positive for the factors not included.30) is expanded with 1 being replaced by (1) in the final expression. (5. This gives an idea about the important factors and interactions and in which direction these factors to be adjusted to improve response. 3. it is easier to form the plus–minus table to determine the required contrasts for estimating effects and sum of squares. Usually we remove all the terms which are not significant from the full model. each at two levels) The statistical model for a 2k design will have · k main effects · is the number of combinations of K items taken 2 at a time = two-factor interactions K 2 · K  three-factor interactions … and one k-factor interaction. 2.30) Using ordinary algebra. Interpret the results along with contour and response surface plots. For example. Perform residual analysis to check the adequacy of the model. Estimate the factor effects and examine their signs and magnitude. 3 Procedure 1. 5. 0 We begin the analysis by computing factor effects and sum of squares. Many analysts feel that the half normal plot is easier to interpret especially when there are only a few effect estimates. Table 5. It is suggested to obtain the normal probability plot of the effect estimates.8 2.3 The 24 Design with Single Replicate An experiment was conducted to study the effect of preheating (A).6 THE SINGLE REPLICATE OF THE 2k DESIGN When the number of factors increases. In such cases. pooling higher order interactions is inappropriate. When only one replication is obtained. hardening temperature (B).2 4.0 6. This is a plot of the absolute value of the effect estimated against their cumulative normal probabilities. Especially in the screening experiments conducted to develop new processes or new products. 
Let us use Yates algorithm for this purpose. Because resources are usually limited.The 2k Factorial Experiments 129 5.0 4.0 5.6 3. some higher order interactions may influence the response. The distortion was measured as the difference in the diameter before and after heat treatment (mm).2 6. the experimenter may restrict the number of replications.6 6.8 3. Occasionally. One approach to analyse the unreplicated factorial is to assume that certain higher order interactions are negligible and pool their sum of square to estimate the error.9 Treatment combination (1) a b ab c ac bc abc Distortion data for Illustration 5. quenching media (C) and quenching time (D) on the distortion produced in a gearwheel.3 3.0 4.8 5. the number of experimental runs would be very large.9. To overcome this problem. Each factor was studied at two levels and only one replication was obtained. Other effects are combined to estimate the error. Those effects which fall away from the straight line are considered as significant effects. the half normal plot of effects can also be used in the place of normal probability plot. A single replicate of a 2k design is also called an unreplicated factorial. the number of factors considered will usually be large since the experimenter has no prior knowledge on the problem. Alternately.6 5. The results are given in Table 5. we will not have the experimental error (pure error) for testing the effects.10 gives the computation of factor effects and sum of squares using Yates algorithm. . The straight line on half normal plot always passes through the origin and close to the fiftieth percentile data value. ILLUSTRATION 5. a different method has been suggested to identify the significant effects.0 Treatment combination d ad bd abd cd acd bcd abcd Response (distortion) 2.3 Response (distortion) 4.2 4. TABLE 5. 10 2.16 –0. Figure 5.76 0.6 3.8 –2.5 1.8 2.77 0.21 –0. When the normal probability is plotted on a half-normal paper.9 shows the halfnormal plot of the effects for Illustration 5.0 –1.61 0. This can be verified by plotting the effects on normal probability paper.9) clearly shows that the largest effect (D) is far away from all other effects indicating its significance.5 – 4. leaving these three effects (D. BC and B).43 1.0 7. One way to resolve this issue is to pool the sum of squares of factors with less contribution into the error term as pooled error and test the other effects. it can be seen that the effects B. Also these are considered as significant on the half-normal plot. Table 5. .34 0.48 1.8 0.38 0.29 0.9. Now.9 1.0 6.23 0.3 41.2 –1.59 –0.8 14.1 –3.13 gives ANOVA with the pooling error.5 –0.2 15.8 –0.27 1.3 –1.130 Applied Design of Experiments and Taguchi Methods TABLE 5. half normal plot of effects can also be used.59 –1.2 –0.6 1.8 18. From Table 5.8 5.5 71.4 0.79 0.0 30.9 1.38 7.4 1.1 3. However. this can be verified from ANOVA.33 2.0 5.3 6.3 3. Those effects which fall along the straight line on the plot are considered as insignificant and they can be combined to obtain pooled error. ABC and BC fall away from the line.6 –1.3 (1) (2) (3) (4) Effect Estimate of Sum of Effect squares (4) ÷ n2k–1 (4)2 ÷ n2k — –0.0 4.2 4.0 –1.4 –1.2 10.6 –1.3.1 6.3 I A B AB C AC BC ABC D AD BD ABD CD ACD BCD ABCD Since only one replication is taken.9 1.2 1. we cannot have experimental error for testing the effects. However.3 3.3 – 4.12.18 0.24 0. Table 5. all the other effects are pooled into the error term to test the effects.15 4.6 11.2 12.2 6. 
And we can suspect that the effects BC and B also as significant from Figure 5.2 4.8 22. the effect with the largest magnitude (D) falls on the line indicating the draw back of using normal plot in identifying the significant effects.7 –3.5 –1.3 4. The computations required for normal probability plot is given in Table 5.0 4. BC and D together contribute to about 57% of the total variation.6 5.8 shows the normal probability plot of the effects.6 0. The halfnormal plot of effects (Figure 5.0 2.5 1.36 0.5 0.10 Treatment combinations (1) a b ab c ac bc abc d ad bd abd cd acd bcd abcd Response Yates algorithm for the 24 design for Illustration 5.3 – 6.4 0.12 gives the initial ANOVA before pooling along with contribution. it is observed that the effects B.9 4.8 – 4.7 4.50 0.7 –2.54 — 0.6 6. Figure 5.6 1.44 0.56 –0. we get half-normal probability plot.2 4.8 2. From the normal plot of the effects (Figure 5.41 –0.68 1.2 8. In place of normal probability plot.7 3.6 9.0 6.1 3.46 1.2 2.1 5.8).11.2 1.7 –10.8 3.0 1.1 –2.32 0. Effects which fall away from the straight line are considered as significant. 44 0.00 36.67 23.5)/N 3.00 96.21 0.29 0. .67 95 90 80 Per cent 70 60 50 40 30 20 10 5 D BC B ABC 1 –2 –1 0 Effect 1 2 FIGURE 5.33 10.8 Normal probability plot of effects for Illustration 5.11 J 1 2 3 4 5 6 7 8 9 10 11 12 1 14 15 99 Normal probability computations of the effects for Illustration 5.00 16.34 –0.33 90.36 –0.16 0.56 0.33 30.3 Effect D BC CD BD ABD BCD A AC ACD C AB ABCD AD ABC B Estimate of effect –1.67 83.79 –0.00 56.33 50.3.The 2k Factorial Experiments 131 TABLE 5.54 0.67 43.24 0.00 76.76 Normal probability (%) (J – 0.59 –0.33 70.41 –0.59 0.67 63.61 –0. 10 2.30 .43 1.9 Half-normal plot of effects for Illustration 5.32 0.71 3.38 0.50 0.37 34.55 1.46 1.15 21.45 6.23 0.3 (24 Design) Sum of squares 0.83 2.30 5.37 3.12 Source of variation A B AB C AC BC ABC D AD BD ABD CD ACD BCD ABCD Total Analysis of variance for Illustration 5. TABLE 5.0 Absolute effect 1.3.14 6.48 1.33 2.132 99 95 90 85 80 70 60 50 40 30 20 10 0 Applied Design of Experiments and Taguchi Methods D Per cent BC B 0.5 1.0 FIGURE 5.86 6.18 0.06 0.77 0.68 1.5 2.38 7.93 0.66 Degrees of freedom 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Contribution* (%) 1.27 1.46 11.0 0.52 10.12 5. We know that in a 22 design.48 7.33 2.43 Degrees of freedom 1 1 1 12 Mean square 2. certain points are to be replicated which provides protection against curvature from second order model and also facilitate independent estimate of error. the factorial points are (– . we assume linearity in factor effects of 2k design.42 9. (5.33) When interaction is significant.12 = 4. +). For this purpose some center points are added to the 2k design. 0). When curvature is not adequately modelled by Eq.33). Equation (5. –) and (+. this model shows some curvature (twisting of the response surface plane).10 shows 22 design with center points. we fit a first order model Y = C0 +  C j X j +   Cij X i X j + e j 1 i  j K (5. From the center points. Analysis of the design Let YC = average of the nC observations yF = average of the four factorial runs . When factors are quantitative. In order to take curvature into account in 2k designs.3 Sum of squares 2. Our initial claim about B and BC is turned out to be false.96 3. error sum of squares is computed. (–. To these points if nC observations are added at the center point (0.44 At 5% level of significance (F1.33 2.34) where bij represents pure second order or quadratic effect. 
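A half-normal plot of this kind is straightforward to produce. The sketch below assumes NumPy, SciPy and matplotlib are available; the effect estimates are placeholder values rather than those of Table 5.10, and the plotting positions used are one common choice.

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    # effect estimates from a single-replicate 2**4 design (illustrative placeholder values)
    effects = {"A": -0.3, "B": 0.9, "C": -0.2, "D": -1.8, "AB": 0.1, "AC": 0.2,
               "AD": 0.2, "BC": 1.1, "BD": -0.3, "CD": 0.4, "ABC": 0.5, "ABD": -0.2,
               "ACD": -0.1, "BCD": 0.3, "ABCD": 0.2}

    names = sorted(effects, key=lambda nm: abs(effects[nm]))
    absvals = [abs(effects[nm]) for nm in names]
    m = len(names)
    p = (np.arange(1, m + 1) - 0.5) / m          # plotting positions
    q = stats.norm.ppf(0.5 + 0.5 * p)            # half-normal quantiles

    plt.scatter(q, absvals)
    for x, y, nm in zip(q, absvals, names):
        plt.annotate(nm, (x, y))
    plt.xlabel("Half-normal quantile")
    plt.ylabel("|Effect estimate|")
    plt.show()

Effects that sit well above the straight line through the small estimates near the origin are the candidates for significance.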
half-normal plot can be a reliable tool for identifying true significant effects. only factor D is significant as revealed by the half-normal plot. Figure 5. (+.The 2k Factorial Experiments 133 TABLE 5. So.7 ADDITION OF CENTER POINTS TO THE 2K DESIGN Usually. 5. With this design usually one observation at each of the factorial point and nC observations at the center point are obtained.15 9.786 F0 2. +).48 7.13 Source of variation B BC D Pooled error Analysis of variance after pooling for Illustration 5.34) is also called the second order response surface model.42 0. it reduces the number of experiments and at the same time facilitate to obtain experimental error.75). we consider a second order model Y = C0 +  C j X j +   Cij Xi X j +  C jj X 2 j +e j 1 i  j j 1 K K (5. –). quadratic curvature will exist. 0) (–. The error sum of squares is computed from the centre points only where we have replications.134 Applied Design of Experiments and Taguchi Methods (+. where yi is ith data of center point i 1 nc (5. The sum of squares for factorial effects (SSA. SSB and SSAB) are computed as usual using contrasts. +) FIGURE 5.11. he has decided to use a 22 design with a single replicate augmented by three center points as shown in Figure 5. The factors and their levels are as follows.37) 2 =  yi2  nC yC ILLUSTRATION 5. SSe =  (Yi  YC )2 .10 The 2 design with center points. Factor A B Low level 10 20 Center 12. –) (+. 2 If ( yF – YC ) is small. Since the chemist is not sure about the linearity in the effects.4 The 22 Design with Center Points A chemist is studying the effect of two different chemical concentrations (A and B) on the yield produced.5 High level 15 25 .5 22. The sum of squares for quadratic effect is given by SSpure quadratic = nF nC (yF  yC )2 nF + nC has one degree of freedom (5.35) where nF is number of factorial design points. This can be tested with error term. +) (0. If ( yF – YC ) is large.36) (5. then the center points lie on or near the plane passing through the factorial points and there will be no quadratic curvature. –) (–. 11 The 22 design with three center points for Illustration 5.00 SSB = (b + ab  1  a)2 1  22 = (30 + 32  29  31)2 4 (C AB ) 2 n 2k = = 1.5 The sum of squares for the main and interaction effects is calculated using the (C A ) 2 n 2k [a + ab + (  1)  b ]2 1  22 SS A = = = (31 + 32  29  30)2 4 (C B ) 2 n 2k = = 4.2 30. 30.4. (+1. Suppose the data collected from the experiment at the four factorial points and the three center points are as given below: Factorial points (–1.3 30.00 .5 – 20% 29 – 10% Concentration (A) 31 + 15% FIGURE 5.00 SS AB = [ab + (1)  a  b]2 4 = (32 + 29  31  30) 4 = 0. +1) 31 30 32 Center points 30. 30. –1) 29 Data analysis: contrasts.2.The 2k Factorial Experiments 25% + Concentration (B) 30 32 135 30.3. +1) (+1. –1) (–1. 39) In this second order model Eq.14.3333 Substituting the respective values in Eq. (5.4 (22 design with center points) Sum of squares 4.7 — 2.9 42.0962 Degrees of freedom 1 1 1 1 2 Mean squares 4. Whereas in the 22 design with one center point. Y = b0 + b1X1 + b2X2 + b12 X1X2 + e (5.5) 3 = 30. quadratic term is required in the model in which case a second order model has to be used.5 YC = (30. (5. we obtain SSpure quadratic = 0. 
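Before working through the illustration, the curvature check of Eqs. (5.35) to (5.37) can be summarised in a few lines of code. The sketch below uses our own helper name curvature_check, and the four factorial responses and three centre-point readings in the usage lines are hypothetical placeholders chosen only to show the computation.

    import numpy as np

    def curvature_check(factorial_y, center_y):
        # Pure-quadratic (curvature) test for a 2^k design augmented with
        # centre points:
        #   SS_pure quadratic = nF*nC*(ybarF - ybarC)^2 / (nF + nC), 1 d.f.
        #   SS_e              = sum of (y - ybarC)^2 over the centre points,
        #                       with nC - 1 degrees of freedom.
        fy = np.asarray(factorial_y, dtype=float)
        cy = np.asarray(center_y, dtype=float)
        nF, nC = fy.size, cy.size
        ss_quad = nF * nC * (fy.mean() - cy.mean()) ** 2 / (nF + nC)
        ss_err = ((cy - cy.mean()) ** 2).sum()
        f0 = ss_quad / (ss_err / (nC - 1))     # F statistic for curvature
        return ss_quad, ss_err, f0

    # Hypothetical example: four factorial runs and three centre runs.
    ss_quad, ss_err, f0 = curvature_check([39.3, 40.9, 40.0, 41.5],
                                          [40.3, 40.5, 40.6])
    print(ss_quad, ss_err, f0)

A small pure-quadratic sum of squares means the centre points lie close to the plane through the factorial points, so the first-order model is adequate; a large value signals quadratic curvature and the need for a second-order model.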
The interaction effect and the quadratic effects are insignificant.0495 0.0467 5.0000 0.136 Applied Design of Experiments and Taguchi Methods SSpure quadratic = nF nC ( yF  yC )2 nF + nC [Eq.38) It is to be noted that the addition of center points do not affect the effect estimates.39).37)] = 2760. (5.38 – 2760. we have six unknown parameters.0000 1. (5.0000 1.0495 The error sum of squares are computed from the center points and is given by 2 2 SSe =  yi  nC yC [Eq.0234 F0 170.39). Y = b0 + b1X1 + b2X2 + b12X1X2 + b11X12 + b22X22 + e (5.14 Source of variation A (Con. .3333 = 0.0000 0. it is required to use four axial runs in the 22 design with one center point. for this problem the first order regression model is sufficient. TABLE 5.0495 0. In order to obtain solution to this model Eq. 1) B (Con. 2) AB Pure quadratic Error Total ANOVA for Illustration 5.3 + 30.1 At 5% level of significance both factors A and B are significant.0000 0. The central composite design is discussed in Chapter 8. If quadratic effect is significant. we have only five independent runs (experiments).0467 These computations are summarized in Table 5. Hence. This type of design is called a central composite design.35)] yF = (29 + 31 + 30 + 32) 4 = 30. (5.35).2 + 30.0000 0. The 2k Factorial Experiments 137 PROBLEMS 5.1 A study was conducted to determine the effect of temperature and pressure on the yield of a chemical process. A 22 design was used and the following data were obtained (Table 5.15). TABLE 5.15 Pressure 1 1 2 29, 33 24, 22 Data for Problem 5.1 Temperature 2 46, 48 48, 44 (a) Compute the factor and interaction effects. (b) Test the effects using ANOVA and comment on the results. 5.2 An Industrial engineer has conducted a study on the effect of speed (A), Tool type (B) and feed (C) on the surface roughness of machined part. Each factor was studied at two levels and three replications were obtained. The results obtained are given in Table 5.16. TABLE 5.16 Treatment combination (1) a b ab c ac bc abc Data for Problem 5.2 Roughness R2 R3 53 83 58 65 61 75 55 61 73 66 51 65 53 73 49 77 Design A B C – + – + – + – + – – + + – – + + – – – – + + + + R1 55 86 59 75 61 73 50 65 (a) Estimate the factor effects. (b) Analyse the data using Analysis of Variance and comment on the results. 5.3 Suppose the experimenter has conducted only the first replication (R1) in Problem 5.2. Estimate the factor effects. Use half-normal plot to identify significant effects. 5.4 In a chemical process experiment, the effect of temperature (A), pressure ( B) and catalyst (C) on the reaction time has been studied and obtained data are given in Table 5.17. 138 Applied Design of Experiments and Taguchi Methods TABLE 5.17 Treatment combination (1) a b ab c ac bc abc (a) (b) (c) (d) Data for Problem 5.4 Design B – – + + – – + + Reaction time R1 R2 16 17 15 16 21 19 20 24 19 14 12 13 27 16 24 29 A – + – + – + – + C – – – – + + + + Estimate the factor effects. Analyse the data using Analysis of Variance and comment on the results. Develop a regression model to predict the reaction time. Perform residual analysis and comment on the model adequacy. 5.5 An experiment was conducted with four factors A, B, C and D, each at two levels. The following response data have been obtained (Table 5.18). 
TABLE 5.18 Treatment combination (1) a b ab c ac bc abc Response R1 R2 41 24 32 33 26 31 36 23 43 27 35 29 28 29 32 20 Data for Problem 5.5 Treatment combination d ad bd abd cd acd bcd abcd Response R1 R2 47 22 35 37 47 28 37 30 45 25 33 35 40 25 35 31 Analyse the data and draw conclusions. 5.6 A 24 factorial experiment was conducted with factors A, B, C and D, each at two levels. The data collected are given in Table 5.19. The 2k Factorial Experiments 139 TABLE 5.19 Treatment combination (1) a b ab c ac bc abc Data for Problem 5.6 Treatment combination d ad bd abd cd acd bcd abcd Response 5 20 8 19 14 16 12 18 Response 7 13 8 11 12 10 15 10 (a) Obtain a normal probability plot of effects and identify significant effects. (b) Analyse the data using ANOVA. (c) Write down a regression model. C H A P T E R 6 Blocking and Confounding in 2k Factorial Designs 6.1 INTRODUCTION We have already seen what is blocking principle. Blocking is used to improve the precision of the experiment. For example, if more than one batch of material is used as experimental unit in an experiment, we cannot really say whether the response obtained is due to the factors considered or due to the difference in the batches of material used. So, we treat batch of material as block and analyse the data. There may be another situation where one batch of material is not enough to conduct all the replications but sufficient to complete one replication. In this case, we treat each replication as a block. The runs in each block are randomized. So, blocking is a technique for dealing with controllable nuisance variables. Blocking is used when resources are not sufficient to conduct more than one replication and replication is desired. There are two cases of blocking, i.e., replicated designs and unreplicated designs. 6.2 BLOCKING IN REPLICATED DESIGNS If there are n replicates, each replicate is treated as a block. Each replicate is run in one of the blocks (time periods, batches of raw materials etc.). Runs within the blocks are randomized. ILLUSTRATION 6.1 Consider the chemical process experiment of 22 design discussed in Chapter 5 (Illustration 5.1). The two replications of this experiment are given in Table 6.1. TABLE 6.1 Treatment (1) a b ab Data from Illustration 6.1 Replications 1 40 43 59 37 140 Replications 2 37 50 54 43 Blocking and Confounding in 2k Factorial Designs 141 Suppose one batch of material is just enough to run one replicate. Each replication is run in one block. Thus, we will have two blocks for this problem as shown in Figure 6.1. From Figure 6.1 the block totals are Block 1 (B1) = 179 and Block 2 (B2 ) = 184. Block 1 (1) = 40 a = 43 b = 59 ab = 37 Block totals: B1 = 179 Block 2 (1) = 37 a = 50 b = 54 ab = 43 B2 = 184 FIGURE 6.1 Blocking a replicated design. The analysis of this design is same as that of 22 design. Here, in addition to effect sum of squares, we will have the block sum of squares. Grand total (T) = 179 + 184 = 363 Correction factor (CF) = 2 2 B1 + B2 T2 N SSBlocks = nB  CF, where nB is the number of observations in B. (363) 2 8 (6.1) = (179)2 + (184) 2 4  = 16474.25 – 16471.125 = 3.125 The analysis of variance for Illustration 6.1 is given in Table 6.2. Note that the effect sum of squares are same as in Illustration 5.1. 
TABLE 6.2 Source of variation Blocks Temperature Pressure Interaction Error Total Sum of squares 3.125 36.125 66.125 300.125 56.375 461.875 Chemical process experiment with two blocks Degrees of freedom 1 1 1 1 3 7 Mean square 3.125 36.125 66.125 300.125 18.79 F0 Significance at 5% — Not significant Not significant Significant 1.92 3.52 15.97 142 Applied Design of Experiments and Taguchi Methods It is to be noted that the conclusions from this experiment is same as that obtained in 22 design earlier. This is because the block effect is very small. Also, note that we are not interested in testing the block effect. 6.3 CONFOUNDING There are situations in which it may not be possible to conduct a complete replicate of a factorial design in one block where the block might be one day, one homogeneous batch of raw material etc. Confounding is a design technique for arranging a complete replication of a factorial experiment in blocks. So the block size will be smaller than the number of treatment combination in one replicate. That is, one replicate is split into different blocks. In this technique certain treatment effects are confounded (indistinguishable) with blocks. Usually, higher order interactions are confounded with blocks. When an effect is confounded with blocks, it is indistinguishable from the blocks. That is, we will not be able to say whether it is due to factor effect or block effect. Since, higher order interactions are assumed as negligible, they are usually confounded with blocks. In 2k factorial design, 2m incomplete blocks are formed, where m < k. Incomplete block means that each block does not contain all treatment combinations of a replicate. These designs results in two blocks, four blocks, eight blocks and so on as given in Table 6.3. TABLE 6.3 k 2 3 4 2k 4 8 16 Number of blocks possible in 2k factorial design m (possible value) 1 1 2 1 2 3 No. of blocks 2m 2 2 4 2 4 8 No. of treatments in each block 2 4 2 8 4 2 When the number of factors are large (screening experiments), there may be a constraint on the resources required to conduct even one complete replication. In such cases, confounding designs are useful. 6.4 THE 2k FACTORIAL DESIGN IN TWO BLOCKS Consider a 22 factorial design with factors A and B. A single replicate of this design has 4 treatment combinations [(1), a, b, ab]. Suppose to conduct this experiment certain chemical is required and available in small batch quantities. If each batch of this chemical is just sufficient to run only two treatment combinations, two batches of chemical is required to complete one replication. Thus, the two batches of chemical will be the two blocks and we assign two of the four treatments to each block. Blocking and Confounding in 2k Factorial Designs 143 6.4.1 Assignment of Treatments to Blocks Using Plus–Minus Signs The plus-minus table (Table 5.2) used to develop contrasts can be used to determine which of the treatments to be assigned to the two blocks. For our convenience Table 5.2 is reproduced as Table 6.4. TABLE 6.4 Treatment combination (1) a b ab Plus–minus signs for 22 design Factorial effect A B – + – + – – + + I + + + + AB + – – + Now we have to select the effect to be confounded with the blocks. As already pointed out, we confound the highest order interaction with the blocks. In this case, it is the AB interaction. In the plus–minus table above, under the effect AB, the treatments corresponding to the +ve sign are (1) and ab. 
These two treatments are assigned to one block and the treatments corresponding to the –ve sign (a and b) are assigned to the second block. The design is given in Figure 6.2. Block 1 (1) ab Block 2 a b FIGURE 6.2 A 22 design in two blocks with AB confounded. The treatments within the block are selected randomly for experimentation. Also which block to run first is randomly selected. The block effect is given by the difference in the two block totals. This is given below: Block effect = [(1) + ab ] – [a + b] = (1) + ab – a – b (6.2) Note that the block effect (1) + ab – a – b is equal to the contrast used to estimate the AB interaction effect. This indicates that the interaction effect is indistinguishable from the block effect. That is, AB is confounded with the blocks. From this it is evident that the plus–minus table can be used to create two blocks for any 2k factorial designs. This concept can be extended to confound any effect with the blocks. But the usual practice is to confound the highest order interaction with the blocks. As another example consider a 23 design to run in two blocks. Let the three factors be A, B and C. Suppose, we want to confound ABC interaction with the blocks. From the signs of plus–minus given in Table 6.5 (same as Table 5.3), we assign all the treatments corresponding to +ve sign under the effect ABC to block 1 and the others to block 2. The resulting design is shown in Figure 6.3. The treatment combinations within each block are run in a random manner. 5 Treatment combination (1) a b ab c ac bc abc Plus–minus signs for 23 design Factorial effect AB C + – – + + – – + Block 2 (1) ab ac bc I + + + + + + + + A – + – + – + – + B – – + + – – + + AC + – + – – + – + BC + + – – – – + + ABC – + + – + – – + – – – – + + + + Block 1 a b c abc FIGURE 6. Thus. 6. we will have l1 = l2 = l3 = 1.3) . X2 ® B and X3 ® C. Suppose in a 23 design. Since the possible value of L (mod 2) are 0 or 1. li = exponent on the ith factor in the effect to be confounded. Since all the three factors appear in the confounded effect. And the defining contrast corresponding to ABC is L = X1 + X 2 + X3 On the other hand. In 2k system we will have li = 0 or 1 and also Xi = 0 (low level) or Xi = 1 (high level).144 Applied Design of Experiments and Taguchi Methods TABLE 6. Let X1 ® A. This procedure is explained as follows: (6..2 Assignment of Treatments Using Defining Contrast This method makes use of a defining contrast L = l1X1 + l2X2 + l3X3 + l4X4 + .. And the defining contrast corresponding to AC would be L = X1 + X3 We assign the treatments that produce the same value of L (mod 2) to the same block. we assign all treatments with L (mod 2) = 0 to one block and all treatments with L (mod 2) = 1 to the second block.4. if we want to confound AC with the blocks.3 A 23 design in two blocks with ABC confounded. + lKXK where Xi = level of i th factor appearing in a particular treatment combination. ABC is confounded with the blocks. all the treatments will be assigned to two blocks. we will have l1 = l3 = 1 and l2 = 0. When replication is possible in the confounding design either complete confounding or partial confounding is used. the data are analysed as in single replication case of any other factorial design. we will not be able to obtain the information about the confounded effect. 6. c.Blocking and Confounding in 2k Factorial Designs 145 Suppose we want to design a confounding scheme for a 23 factorial in two blocks with ABC confounded with the blocks. The resulting design is shown in Figure 6. 
we have to find L (mod 2) for all the treatments.4. This assignment is same as the design (Figure 6. For example.3 Data Analysis from Confounding Designs Analysing data through ANOVA require experimental error. 1) notation. ab. replication of the experiment is necessary. abc are assigned to block 2.4) Thus (1).6. b. Thus in this design. The defining contrast for this case is L = X1 + X2 + X3 Using (0. So.4 and the ANOVA is given in Table 6. Replicate 1 Block 1 a b c abc Block 2 (1) ab ac bc Replicate 2 Block 1 a b c abc Block 2 (1) ab ac bc Replicate 3 Block 1 a b c abc Block 2 (1) ab ac bc FIGURE 6. (1) a b ab c ac bc abc : : : : : : : : L L L L L L L L = = = = = = = = 1(0) 1(1) 1(0) 1(1) 1(0) 1(1) 1(0) 1(1) + + + + + + + + 1(0) 1(0) 1(1) 1(1) 1(0) 1(0) 1(1) 1(1) + + + + + + + + 1(0) 1(0) 1(0) 1(0) 1(1) 1(1) 1(1) 1(1) = = = = = = = = 0 1 1 2 1 2 2 3 = = = = = = = = 0 1 1 0 1 0 0 1 (mod (mod (mod (mod (mod (mod (mod (mod 2) 2) 2) 2) 2) 2) 2) 2) (6. 6.5. ac. bc are assigned to block 1 and a. If replication is not possible due to constraint on resources. .5 COMPLETE CONFOUNDING Confounding the same effect in each replication is called completely/fully confounded design. consider a 23 design to be run in two blocks with ABC confounded with the blocks and the experimenter wants to take three replications.4 A 23 design with ABC confounded in three replicates. significant effects are identified using normal probability plot of effects and the error sum of squares is obtained by pooling the sum of squares of insignificant effects.3) obtained using plus–minus signs of Table 6. That is. 6 PARTIAL CONFOUNDING In complete confounding.5.7 gives the analysis of variance for the partially confounded 23 design with three replicates shown in Figure 6. AB is confounded in Replication 2 and AC is confounded in Replication 3. In this design we can test all the effects. There are three replicates. the information on the confounded effect cannot be retrieved because the same effect is confounded in all the replicates (Figure 6. For example. Interaction ABC is confounded in Replication 1.146 Applied Design of Experiments and Taguchi Methods TABLE 6. Such design is called partial confounding. information about AB from Replicates 1 and 3 and information about AC from Replicates 1 and 2.6 Analysis of variance for the 23 design with ABC confounded in three replicates Source of variation Replicate Blocks (ABC) within replicate A B C AB AC BC Error Total Degrees of freedom 2 3 1 1 1 1 1 1 12 23 6.4). We can obtain information about ABC from the data in Replicates 2 and 3. Table 6. Whenever replication of a confounded design is possible.5. consider the design shown in Figure 6.7 ANOVA for the partially confounded 23 design with three replicates Degree of freedom 2 3 1 1 1 1 1 1 1 11 23 Source of variation Replicates Blocks within replicate (ABC in Replicate 1 + AB in Replicate 2 + AC in Replicate 3) A B C AB (from Replicates 1 and 3) AC (from Replicates 1 and 2) BC ABC (from Replicates 2 and 3) Error Total . different effect is confounded in different replications. instead of confounding the same effect in each replication. TABLE 6. Thus in this design we say that 2/3 of the relative information about confounded effects can be retrieved. Each factor was studied at two levels. pressure (B) and stirring rate (C) on the yield of a chemical process. ILLUSTRATION 6. TABLE 6. AB . 
and BC are computed using data from both the replications.2 b 5 ab 11 c –6 ac –14 bc 24 abc 6 The sum of squares can be computed using the contrasts.5 Partial confounding of the 2 design. SSAC is computed from Replicate 1 only. The sum of squares of A . B.8. These can be derived from Table 5.Blocking and Confounding in 2k Factorial Designs Replicate 1 Block 1 a b c abc Block 2 (1) ab ac bc Replicate 2 Block 1 (1) ab c abc Block 2 a b ac bc Replicate 3 Block 1 (1) b ac abc Block 2 a ab c bc 147 ABC confounded AB confounded 3 AC counfounded FIGURE 6.6 Data for Illustration 6.2 Partial Confounding A study was conducted to determine the effect of temperature (A). Since ABC is confounded in Replicate 1.8 Treatment Total (1) –33 a –19 Response totals for Illustration 6. The design and the coded data are shown in Figure 6. C. each replicate of the 23 design was run in two blocks. And the sum of squares of replication (SSRep) is computed using the replication totals. The response totals for all the treatments are given in Table 6.3.2. The two replicates were run with ABC confounded in Replicate 1 and AC confounded in Replicate 2. Replicate 1 ( R1) ABC confounded Block 1 a b c abc = = = = –8 –5 –4 –1 Block 2 (1) = –18 ab = 5 ac = –10 bc = 10 Replicate 2 ( R 2) AC confounded Block 1 (1) = –15 b = 10 ac = –4 abc = 7 R2 = 5 Block 2 a = –11 ab = 6 c = –2 bc = 14 R 1 = –31 FIGURE 6. . Similarly. SSABC is computed from Replicate 2 only.6. As each batch of raw material was just enough to test four treatment combinations. 25 SS C = [  (1)  a  b  ab + c + ac + bc + abc ]2 16 [  (  33)  (  19)  5  11 + (  6) + (  14) + 24 + 6]2 16 (46) 2 16 = = = 132.25 SSBC = [(1) + a  b  ab  c  ac + bc + abc]2 (n * 2k ) [  33  19  5  11  (  6)  (  14) + 24 + 6]2 16 = = (  18)2 16 = 20.25 .25 = SSB = [  (1)  a + b + ab  c  ac + bc + abc ]2 (n * 2 k ) [  (  33)  (  19) + 5 + 11  (  6)  (  14) + 24 + 6]2 16 (118) 2 16 = = = 870.148 Applied Design of Experiments and Taguchi Methods SSA = [  (1) + a  b + ab  c + ac  bc + abc ]2 (n 2 k ) = [  (  33) + (  19)  5 + 11  (  6) + (  14)  24 + 6]2 [2 * 8] (  6)2 16 = 2.25 SSAB = [(1)  a  b + ab + c  ac  bc + abc]2 (n * 2 k ) [  33  (99)  5 + 11  6  (  14)  24 + 6]2 16 (  18) 2 16 = = = 20. 125 = = The replication sum of square is computed from the replication totals (R1 = –31 and R2 = 5) Grand total (T) = –26 and N = 16.125 Therefore. SSABC = [ (1) + a + b  ab + c  ac  bc + abc]2 (n * 2k ) = [  (  15) + (  11) + 10  6 + (  2)  (  4)  14 + 7]2 (1 * 8) (3) 2 8 = = 1.25 .25 = 81.125 SSAC is computed using data from Replicate 1 only. It is found that SSABC {from Replicate 1} = 3.125 + 10.5) where nRep is the number of observation in the replication SSRep = (  31)2 + (5)2 8  (  26)2 16 = 123.125 = 13. SSBlock = 3. CF = T2 N 2 2 R1 + R2 SSRep = nRep  CF (6. SSAC = [(1)  a + b  ab  c + ac  bc + abc ]2 n * 2k [  18  (  8)  5  5  (  4)  10  10  1]2 1*8 (  37)2 8 = 171.Blocking and Confounding in 2k Factorial Designs 149 SSABC is computed using data from Replicate 2 only.25 – 42.125 SSAC {from Replicate 2} = 10.00 The block sum of square is equal to the sum of SSABC from Replicate 1 and SSAC from Replicate 2. We select two effects to be confounded with the blocks.11 17. 1) to the second block.125 20.6) (  26) 2 16 = [(  8) 2 + (  5) 2 + .125 48.25 132.60 – – 0.9.2 is given in Table 6. 0) to the third block and (1. 
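The contrast arithmetic shown above can be written compactly in code. The sketch below, with our own helper name contrast, reproduces the sums of squares of the effects that are estimated from both replicates, using the treatment totals of Table 6.8; the partially confounded effects AC and ABC would instead be computed from the single replicate in which each is unconfounded, as the worked computations show.

    def contrast(effect, totals):
        # Contrast of a factorial effect from treatment totals.  A treatment
        # enters with sign +1 for every letter of the effect that is present
        # in its label (factor at the high level) and -1 for every letter
        # that is absent, the overall sign being the product.
        value = 0.0
        for label, y in totals.items():
            present = set() if label == "(1)" else set(label.upper())
            sign = 1
            for letter in effect:
                sign *= 1 if letter in present else -1
            value += sign * y
        return value

    # Treatment totals over the two replicates (Table 6.8)
    totals = {"(1)": -33, "a": -19, "b": 5, "ab": 11,
              "c": -6, "ac": -14, "bc": 24, "abc": 6}
    n, k = 2, 3
    for eff in ["A", "B", "C", "AB", "BC"]:
        C = contrast(eff, totals)
        print(eff, C, C ** 2 / (n * 2 ** k))   # e.g. B: contrast 118, SS 870.25

Dividing the squared contrast by n*2^k gives the sum of squares, exactly as in the hand computations.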
Each contrast L1 (mod 2) and L2 (mod 2) will yield a specific pair of values for each treatment combination.125 9.25 171.25 20. We obtain 2k–2 observations in each of these four blocks.25 2. 1) or (1. In (0.00 13. .125 20. 1) to the fourth block. We construct these designs using the method of defining contrast.25 870.25 171.25 132.25 870.11 .25 1.25 = 1359.00 The analysis of variance for Illustration 6. 0) are assigned to first block. TABLE 6.75 Mean square – – 2.83 2. 0) or (0. Hence we will have two defining contrasts L1 and L2.75 SSe = SSTotal – S SS of all the effects + SSRep + SSBlock = 1359. all treatments with (0..7 CONFOUNDING 2k DESIGN IN FOUR BLOCKS Confounding designs in four blocks are preferred when k > = 4. (0.75 = 48. + (  2) 2 + (14) 2 ]  = 1402 – 42.78 2.00 1359. L2) will produce either (0.25 1. 1).12 Not significant Significant Significant Not significant Significant Not significant Not significant F0 Significance at a = 5% From ANOVA it is found that the main effects B and C and the interaction AC are significant and influence the process yield.2) Sum of squares 1 2 1 1 1 1 1 1 1 5 15 Degrees of freedom 81. 6.25 20.75 – 1311. Thus.9 Source of variation Replicates Block within replications A B C AB AC (from R1 ) BC ABC (from R 2) Error Total ANOVA for 23 design with partial confounding (Illustration 6. The treatment combinations producing the same values of (L1. (1. 0) or (1.150 Applied Design of Experiments and Taguchi Methods 2 SSTotal =     Yijkl  CF i j k l (6. L2) are assigned to the same block. 1) notation (L1.23 90.65 13.. D ® X 4 L 1 = X2 + X3 L 2 = X1 + X4 All the treatments are evaluated using L1. Suppose we want to confound BC and AD with the blocks. (1) : L1 L2 a : L1 L2 = = = = 1(0) + 1(0) + 0(mod 1(1) + 1(0) = 0 (mod 2 = 0) 1(0) = 0 (mod 2 = 0) 2 = 0) since X 1 does not appear in L1 0 = 1 (mod 2 = 1) (6.7. B ® X2 .10 Treatment combination (1) a b ab c ac bc abc d ad bd abd cd acd bcd abcd Assignment of treatments of 24 design to four blocks L1 (mod 2) 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 L2 (mod 2) 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 Assignment Block Block Block Block Block Block Block Block Block Block Block Block Block Block Block Block 1 3 2 4 2 4 1 3 3 1 4 2 4 2 3 1 The 24 design in four blocks is shown in Figure 6. Let A ® X1. L2 as done in Section 6.10 along with the assignment of blocks.Blocking and Confounding in 2k Factorial Designs 151 ILLUSTRATION 6. These effects will have two defining contrasts.7) (6. TABLE 6.4.3 Design of a 24 Factorial Experiment in Four Blocks To obtain four blocks we select two effects to be confounded with the blocks. C ® X3. These are given in Table 6.8) Similarly we can obtain L1 (mod 2) and L2 (mod 2) values for other treatments.2. . Also when selecting the m independent effects for confounding. Lm associated with the confounded effects. Step 2: Construct m defining contrasts L1. The data obtained are given in Table 6. Independent effect means that the effect selected for confounding should not become the generalized interaction of other effects.8 CONFOUNDING 2k FACTORIAL DESIGN IN 2 m BLOCKS Confounding 2k factorial design method discussed in Section 6. PROBLEMS 6. Thus. We will have 2m – m – 1 generalized interactions from the m independent effects selected for confounding which will also be confounded with the blocks. the resulting design should not confound the effects that are of interest to us.1 A study was conducted using a 23 factorial design with factors A. 6. 
….7 The 24 factorial design in four blocks with BC and AD confounded. each containing 2k–m treatments. This is equal to the product of BC and AD mod 2. Note that there are 4 blocks with 3 degrees of freedom between them. The effects to be selected for generating the blocks is available in Montgomery (2003). Procedure: Step 1: Select m independent effects to be confounded with the blocks. The statistical analysis of these designs is simple.7 can be extended to design 2k factorial design in 2m blocks (m < k ). So. Then the block sum of squares is found by adding the sum of squares of all the confounded effects. B and C. But only two effects (BC and AD ) are confounded with the blocks and has one degree of freedom each.152 Applied Design of Experiments and Taguchi Methods Block 1 L1 = 0 L2 = 0 (1) bc ad abcd Block 2 L1 = 1 L2 = 0 b c abd acd Block 3 L1 = 0 L2 = 1 a d abc bcd Block 4 L1 = 1 L2 = 1 ab ac bd cd FIGURE 6. This will create 2m blocks. L2. where each block contains 2k–m treatments. . BC * AD = ABCD will also be confounded with the blocks. The sum of squares are computed as though no blocking is involved. one more degree of freedom should be confounded.10. This should be the generalized interaction between the two effects BC and AD. Analyse the data. Suppose ABC is confounded in replicate 1 and BC is confounded in replicate 2. Analyse this experiment assuming each replicate as a block. Suppose ABC is confounded in each replicate. 6.1.5 Consider the data in Problem 5.6. Analyse the data. Construct a design with four blocks confounding ABC and ABD with the blocks. 6.8 Consider the data from the second replication of Problem 5.2.1. Analyse the data. Analyse the data.2 Consider the experiment described in Problem 5. .3 Consider the first replicate (R1 ) of Problem 6. Construct a design with four blocks confounding ABC and ACD with the blocks. construct a design with two blocks of eight observations each confounding ABCD.5.5. Design an experiment to run these observations in two blocks with ABC confounded. 6. 6. Analyse the data and draw conclusions.1.4 Consider the data in Problem 5. Using replicate 2. Suppose that this replicate could not be run in one day. 6.7 Consider the data in Problem 6. 6. Analyse the data.Blocking and Confounding in 2k Factorial Designs 153 TABLE 6.6 Consider Problem 6.1 Response R1 15 17 34 22 18 5 3 12 R2 12 23 29 32 25 6 2 18 Analyse the data assuming that each replicate (R1 and R 2 ) as a block of one day. 6.10 Treatment combination (1) a b ab c ac bc abc Data for Problem 6. Out of the 31 degrees of freedom only 5 degrees of freedom are associated with the main factors and 10 degrees of freedom account for two-factor interactions. Since this design consists of 23–1 = 4 treatment combinations. One complete replication of this design consists of 8 runs.2 THE ONE-HALF FRACTION OF THE 2k DESIGN For running a fractional factorial replication. called 23–1 design. For example. Then detailed experiments are conducted on these identified factors. B and C. where 3 denotes the number of factors and 2–1 = 1/2 denotes the fraction. The confounded effect is called the generator of this fraction and sometimes it is referred to as a word. Suppose the experimenter cannot run all the 8 runs and can conduct only 4 runs. the design is called a fractional replication or fractional factorial. one complete replicate of 25 designs would require 32 runs. Consider a 23 design with three factors A. When only a fraction of a complete replication is run. 
These fractional factorial designs are widely used in the industrial research to conduct screening experiments to identify factors which have large effects. 154 . a one-half fraction of 23 design. Even though these experiments have certain disadvantages. 7. the number of experiments would be very high.C H A P T E R 7 Two-level Fractional Factorial Designs 7. Table 7. If the experimenter assumes that certain higher order interactions are negligible. As the number of factors increases. As already pointed out. This forms the one-half fraction of 2k design. a confounding scheme is used to run in two blocks and only one of the blocks is run. even in 2k factorial designs.1 is a plus–minus table for the 23 design with the treatments rearranged. we confound the highest order interaction with the blocks. The rest are associated with threefactor and other higher order interactions. This becomes a one-half fraction of a 23 design. benefits are economies of time and other resources.1 INTRODUCTION During process of product development one may have a large number of factors for investigation. information on the main effects and lower order interactions can be obtained by conducting only a fraction of full factorial experiment. the identity element I is always +ve.2). ABC is called the generator of this fraction. The resulting design is the one-half fraction of the 23 design and is called 23–1 design (Table 7. we find that lA = lBC lB = lAC lC = l AB . the two factors interactions are estimated from lAB = 1/2(–a – b + c + abc) lAC = 1/2(–a + b – c + abc) lBC = 1/2(a – b – c + abc) From the above. Since the fraction is formed by selecting the treatments with +ve sign under ABC effect. the defining relation for the design TABLE 7.1 Treatment combinations a b c abc ab ac bc (1) Plus–minus table for 23 design Factorial effect AB C – – + + + – – + – – + + – + + – I + + + + + + + + A + – – + + + – – B – + – + + – + – AC – + – + – + – + BC + – – + – – + + ABC + + + + – – – – Suppose we select the treatment combinations corresponding to the + sign under ABC as the one-half fraction. shown in the upper half of Table 7. Further.1. B and C are lA = 1/2(a – b – c + abc) lB = 1/2(–a + b – c + abc) lC = 1/2(–a – b + c + abc) Similarly.Two-level Fractional Factorial Designs 155 TABLE 7.2 Run A 1 2 3 4 + – – + One-half fraction of 23 design Factor B – + – + C – – + + The linear combination of observations used to estimate the effects of A. so we call I = +ABC. we can obtain the de-aliased estimates of all effects by analyzing the eight runs as a 23 design in two blocks. So. I = A . we can find the aliases for B and C. in fact. it is not possible to differentiate whether the effect is due to A or BC and so on. The defining relation for this design would be I = –ABC The alias structure for this fraction would be l¢A ® A – BC l¢B ® B – AC l¢C ® C – AB So.156 Applied Design of Experiments and Taguchi Methods Thus. B – AC and C – AB. when we estimate A. B and AC. This one-half fraction with I = +ABC is called the principle fraction. B is confounded with AC and C is confounded with AB . any one fraction can be used. Suppose. ABC = A2BC Since the square of any column (A ´ A) is equal to the identity column I. we say that each is an alias of the other. 
This can also be done by adding and subtracting the respective linear combinations of effects from the two fractions as follows: 1/2(lA + l¢A) = 1/2(A + BC + A – BC) ® A 1/2(lB + l ¢B) = 1/2(B + AC + B – AC) ® B 1/2(lC + l¢C) = 1/2(C + AB + C – AB) ® C 1/2(lA – l¢A) = 1/2(A + BC – A + BC) ® BC 1/2(lB – l¢B) = 1/2(B + AC – B + AC) ® AC 1/2(lC – l¢C) = 1/2(C + AB – C + AB) ® AB . A . B and C with this fraction we are. This is denoted by lA ® A + BC lB ® B + AC lC ® C + AB This is the alias structure for this design. we get A = BC Similarly. That is. This will be a complementary one-half fraction. I = ABC. A is confounded with BC. Suppose we want to find the alias for the effect A. Multiplying the defining relation by any effect yields the alias of the effect. we select the treatments corresponding to –ve sign under ABC (Table 7. The alias structure can be determined by using the defining relation of the design. A and BC. we are getting confounding effects.1) for forming the fraction. When two effects are confounded. Aliasing of effects is the price one must bear in these designs. estimating A – BC. If we run both fractions. Both fractions put together form the 23 design. In practice. and C and AB aliases. the defining relation is I = ABC . but two-factor interactions are aliased with three-factor interactions. the resolution of 3–1 design. In fractional factorial design the effects are aliased with other effects. For example. the less restrictive shall be the assumption about the interactions aliased to be negligible. it is of resolution IV design. And two-factor interactions may be aliased with each other. On the other hand. The higher the resolution.Two-level Fractional Factorial Designs 157 7. Add the kth factor equal to the highest order interaction in the 2k–1 design and fill the plus–minus signs. Usually. The characteristics of these designs are discussed as follows: Resolution III designs: A resolution III design does not have any main effect aliased with any other main effect. So. This will be the 2k–1 fractional factorial design with the highest resolution. if a main effect is aliased with a two-factor interaction. And write down the resulting treatments for the fractional design. if I = ABCD. Its effect estimate can be said to be more accurate since a four-factor interaction can be assumed as negligible. The design resolution deals with these aspects. In full factorial design all the effects can be independently estimated. For example: A 23–1 design with I = ABC Resolution IV designs: A resolution IV design does not have any main effect aliased with each other or two-factor interactions. For example: A 25–1 design with I = ABCDE 7. IV and V are considered as important. ABC is called the word in the defining relation. For the preceding 23–1 design. Thus. the estimate of main effect may not be accurate since a two-factor interaction may also exist. The resolution of a two-level fractional factorial design is equal to the number of letters present in the smallest word in the defining relation. Suppose a main effect is aliased with a four-factor interaction. For example: A 24–1 design with I = ABCD Resolution V designs: In resolution V designs no main effect or two-factor interaction is aliased with any other main effect or two-factor interaction. Each row of this is one run (experiment). Suppose I = BDE = ACE = ABCD. it is of resolution III design (appears a three-letter smallest word). 
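Working out aliases and the design resolution by hand becomes tedious as the number of factors grows, so a small sketch may help. The code below, under our own helper names (multiply, defining_relation, resolution_and_aliases), treats each word as a set of factor letters, builds the complete defining relation from the chosen generators and their generalized interactions, reads the resolution off the shortest word, and lists the aliases of each main effect. The usage lines use the I = ABC and I = ABCD fractions cited here as resolution III and IV examples, plus a two-generator fraction for illustration.

    from itertools import combinations

    def multiply(e1, e2):
        # Product of two effects/words: squared letters cancel, so the
        # result is the symmetric difference of the two letter sets.
        return frozenset(e1) ^ frozenset(e2)

    def defining_relation(generators):
        # All words of the defining relation: the generators plus every
        # generalized interaction (product of two or more generators).
        gens = [frozenset(g) for g in generators]
        words = set()
        for r in range(1, len(gens) + 1):
            for combo in combinations(gens, r):
                w = frozenset()
                for g in combo:
                    w = multiply(w, g)
                words.add(w)
        return words

    def resolution_and_aliases(generators, factors):
        words = defining_relation(generators)
        resolution = min(len(w) for w in words)
        aliases = {f: sorted("".join(sorted(multiply(f, w))) for w in words)
                   for f in factors}
        return resolution, aliases

    # 2^(3-1) with I = ABC: resolution III, A aliased with BC, and so on.
    print(resolution_and_aliases(["ABC"], "ABC"))
    # 2^(4-1) with I = ABCD: resolution IV, A aliased with BCD, and so on.
    print(resolution_and_aliases(["ABCD"], "ABCD"))
    # Two generators, e.g. I = ABD and I = BCE for a 2^(5-2): their product
    # ACDE, a generalized interaction, also joins the defining relation.
    print(resolution_and_aliases(["ABD", "BCE"], "ABCDE"))

The symmetric-difference product is exactly the rule used in the text: multiplying an effect into the defining relation, with squared letters dropping out, yields its aliases.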
We prefer to use highest resolution designs.4 CONSTRUCTION OF ONE-HALF FRACTION WITH HIGHEST RESOLUTION The following steps discuss the construction of one-half fraction with highest resolution: Step 1: Step 2: Write down the treatment combinations for a full 2k–1 factorial with plus–minus signs.3 DESIGN RESOLUTION The level of confounding of an experiment is called its resolution. but are aliased with two-factor interactions. Some two-factor interactions are aliased with other two-factor interactions. . The designs with resolution III. design resolution is an indicator of the accuracy of the design in terms of the effect estimates. we represent the design 23–1 design is Resolution III and is denoted as 2III resolution by a roman numeral. Speed (A). I = –ABC C = – AB – + + + Treatment (1) ac bc abc Note that these two fractions are the two blocks of a 23 design. Step 1: The treatments in 23–1 full factorial (22 design) are (1).3 23–1 full factorial A (1) a b ab – + – + B – – + + Replace the first column by runs. Feed (B).158 Applied Design of Experiments and Taguchi Methods 3–1 Suppose we want to obtain a 2III fractional factorial design. .4) TABLE 7. TABLE 7. Four factors. Depth of cut (C) and Type of coolant (D) are studied each at two levels. The coded data of the response (surface finish) obtained is given in Table 7.3).1 The surface finish of a machined component is being studied. ILLUSTRATION 7. a.5) TABLE 7. This is the required design which is a 2III Similarly we can obtain the other fraction with I = –ABC (Table 7. ab (Table 7. Step 2: Add the kth factor C equal to the highest order interaction in 2k–1 design (AB interaction) (Table 7. b.5 Run 1 2 3 4 A – + – + B – – + + 3–1 2III .4 Run 1 2 3 4 A – + – + B – – + + 2k–1 design C = AB + – – + Treatment c a b abc 3–1 design with I = ABC.6. confounding ABC with the blocks. 7 Run 1 2 3 4 5 6 7 8 A – + – + – + – + The one-half fraction of the 24–1 design with data B – – + + – – + + C – – – – + + + + D = ABC – + + – + – – + Treatment (1) ad bd ab cd ac bc abcd Response –18 –2 5 8 –12 2 1 10 The defining relation for the design is I = ABCD.7. TABLE 7. The response data (Table 7. The construction of one-half fraction of the 24–1 design is given in Table 7.1 A2 B2 C3 –9 5 C4 1 1 C1 –6 –2 B1 C2 2 9 C3 8 8 B2 C4 12 10 The construction of a one-half fraction of the 24–1 design with I = ABCD. and the data analysis is discussed below. its aliases.7.6) of the treatments is also presented in Table 7.6 A1 B1 C1 D1 D2 –18 –11 C2 –17 –12 Data for Illustration 7. The aliases are as follows: lA ® A + BCD lB ® B + ACD lC ® C + ABD lD ® D + ABC lAB ® AB + CD lAC ® AC + BD l AD ® AD + BC The linear combination of observations (contrasts) used to estimate the effects and computation of sum of squares are as follows: CA = –(1) + ad – bd + ab – cd + ac – bc + abcd = –(–18) – 2 – 5 + 8 – (–12) + 2 – 1 + 10 = 42 CB = –(1) – ad + bd + ab – cd – ac + bc + abcd = –(–18) – (–2) + 5 + 8 – (–12) – 2 + 1 + 10 = 54 .Two-level Fractional Factorial Designs 159 TABLE 7. the sum of squares of the main effect C and D and the two-factor interactions are pooled into the error term.0 lD = 2.1 Sum of squares SSA = 220.50 SSAC = 2.1 and 7.50 SSC = 8.00 SSAB = 40.2).8.00 SSD = 8.50 SSB = 364. From these two plots of effects (Figures 7.00 Alias structure lA ® A + BCD lB ® B + ACD lC ® C + ABD lD ® D + ABC l AB ® AB + CD l AC ® AC + BD l AD ® AD + BC Estimate of effect lA = 10. .5 lC = 2.0 lAB = –4. 
where C is the contrast Sum of squares of a factor/interaction = C2 n2 k where k = 4 and n = 1/2 (one half of the replication) The estimate of effects and sum of squares of all factors and interactions are summarized in Table 7.1 and 7.0 lAD = –3. Hence.5 l AC = 1.00 SSAD = 18. TABLE 7.160 Applied Design of Experiments and Taguchi Methods CC = –(1) – ad – bd – ab + cd + ac + bc + abcd = –(–18) – (–2) – 5 – 8 – 12 + 2 + 1 + 10 = 8 CD = –(1) + ad + bd – ab + cd – ac – bc + abcd = –(–18) – 2 + 5 – 8 – 12 + – 2 – 1 + 10 = 8 CAB = +(1) – ad – bd + ab + cd – ac – bc + abcd = –18 – (–2) – 5 + 8 – 12 – 2 – 1 + 10 = –18 CAC = +(1) – ad + bd – ab – cd + ac – bc + abcd = –18 – (–2)+ 5 – 8 – (–12) + 2 – 1 + 10 = 4 CAD = +(1) + ad – bd – ab – cd – ac + bc + abcd = –18 – 2 – 5 – 8 – (–12) – 2 + 1 + 10 = –12 Effect of factor/interaction = C n 2 k 1 .2 respectively.0 The normal plot and the half normal plot of the effects for Illustration 7.1 are shown in Figures 7.9. it is evident that the main effects A and B alone are significant.8 Effects and sum of squares and alias structure for Illustration 7.5 lB = 13. The analysis of variance is given in Table 7. 1.Two-level Fractional Factorial Designs 161 B A FIGURE 7. .1.1 Normal probability plot of the effects for Illustration 7.2 Half-normal plot of the effects for Illustration 7. FIGURE 7. 50 15. Then. Suppose k = 5(A.50 From the analysis of variance. C.5 THE ONE-QUARTER FRACTION OF 2k DESIGN When the number of factors to be investigated are large.162 Applied Design of Experiments and Taguchi Methods TABLE 7. The generalized interaction of ABD and BCE is ACDE. 7.10.9 Source of variation A B Pooled error Total F0.50 364.1 Degrees of freedom 1 1 5 7 Mean square 220. TABLE 7. it is found that only the main factors speed (A) and feed (B) influence the surface finish. The design will have two generators and their generalized interaction in the defining relation.50 661. Construction of 2k–2 design: The following steps discuss the constructions of 2k–2 design: Step 1: Write down the runs of a full factorial design with k –2 factors. And the design resolution is III. D and E).5 = 6. Step 2: Have two additional columns for other factors with appropriately chosen interactions involving the first k–2 factors and fill the plus–minus signs. The 25–2 fractional factorial design is shown in Table 7. B. . So the complete defining relation for the design is I = ABD = BCE = ACDE. This design will contain 2k–2 runs and is called 2k–2 fractional design.05. one-quarter fraction factorial designs are used.41 23.1.3 F0 14.61 Analysis of variance for Illustration 7.10 Run 1 2 3 4 5 6 7 8 A – + – + – + – + B – – + + – – + + The 25–2 fractional factorial design C – – – – + + + + D = AB + – – + + – – + E = BC + + – – – – + + Treatment de ae b abd cd ac bce abcde The generators for this design (25–2) are I = ABD and I = BCE. And write down the resulting treatments which is the required 2k–2 fractional factorial design.50 364.82 Sum of squares 220.50 76. k – 2 = 3. For large value of k. These designs can be constructed by the methods discussed in the preceding sections of this chapter.10 is given in Table 7. A design up to 3 factors with 4 runs is a 2III design.11. 7.6 THE 2k–m FRACTIONAL FACTORIAL DESIGN A 2k–m fractional factorial containing 2k–m runs is a 1/2m fraction of the 2k design or it is a 2k–m fractional factorial design. These designs are called saturated designs (k = n – 1) since all 3–1 columns will be assigned with factors. 7. 
we assume that higher order interactions (3 or more factors) are negligible. E = ± AC or E = ± ABC The alias structure for the design given in Table 7. where n is a multiple of 4. TABLE 7. The generators are selected such that the interested effects are not aliased with each other. E = – BC. etc. Studying 7 factors in 8 runs is a 2III design (1/16 fraction of 27 design). The defining relation for this design contains m generators selected initially and their 2m–m–1 generalized interactions. Generally we select the generator such that the resulting 2k–m design has the highest possible resolution. 16 runs (up to 15 factors).Two-level Fractional Factorial Designs 163 Note that. 3 7–4 which is one-half fraction of 2 design. . Each effect has 2m–1 aliases. other fractions for this design can be obtained by changing the design generators like D = – AB.4.11 The alias structure for the 25–2 design (I=ABD=BCE=ACDE) A = BD = ABCE = CDE B = AD = CE = ABCDE C = ABCD = BE = ADE D = AB = BCDE = ACE E = ABDE = BC = ACD AC = BCD = ABE = DE AE = BDE = ABC = CD The analysis of data from these designs is similar to that of 2k–1 design discussed in Section 7. 8 runs (up to 7 factors).7 FRACTIONAL DESIGNS WITH SPECIFIED NUMBER OF RUNS We can construct resolution III designs for studying up to k = n – 1 factors in n runs. These designs are frequently used in the industrial experimentation. Guidelines are available for selecting the generator for the 2k–m fractional designs (Montgomery 2003). This simplifies the alias structure. Thus we can have design with 4 runs (up to 3 factors). The alias structure is obtained by multiplying each effect by the defining relation. we can isolate this main effect and its two-factor interaction. The example for a fold-over design is given in Table 7. Even if we change the signs of any one generator and add that fraction to another fraction. (to be conducted). When we fold over a resolution III design. This we do not know. the resulting design is also a foldover design. To a principal fraction if a second fraction is added with the signs of a main factor reversed. if the signs on the generators that have an odd number of letters is changed.164 Applied Design of Experiments and Taguchi Methods 7. we call such a design as fold-over design. To this design. Alternatively. the aliased main effect may not be significant and the interaction may really affect the response.8 FOLD-OVER DESIGNS In resolution III designs. a fold-over design 3–1 is recommended.12. we obtain a full fold-over design. If we add to a resolution III fractional design a second fraction in which the signs of all factors are reserved. In order to know this. since it is aliased with a main effect. if we add the other fraction with –ABC as the generator.12 Original fraction (F1 ) Run 1 2 3 4 5 6 7 8 A – + – + – + – + B – – + + – – + + C – – – – + + + + D = AB + – – + + – – + E = AC + – + – – + – + F = BC + + – – – – + + G = ABC – + + – + – – + Treatment def afg beg abd cdg ace bcf abcdefg 7–4 Full fold-over design for 2III fractional design Fold-over fraction (F2 ) Run 1¢ 2¢ 3¢ 4¢ 5¢ 6¢ 7¢ 8¢ –A + – + – + – + – –B + + – – + + – – –C + + + + – – – – –D = AB – + + – – + + – –E = AC – + – + + – + – –F = BC – – + + + + – – –G = –ABC + – – + – + + – Treatment abcg bcde acdf cefg abef bdfg adeg (1) . TABLE 7. to utilize the data collected in the experiment already conducted along with a second experimental data. Suppose we have conducted an experiment using 2III design with +ABC as the generator. 
One way is to design a separate experiment (full factorial or higher resolution design) and analyse. further detailed experimentation is needed. The objective is to isolate the confounded effects in which we are interested. the main effects are aliased with two-factor interactions. In some experiments. resulting design is called a full fold-over or a reflection design. TABLE 7. (7. I = ACE. The alias structure for the original fraction and its fold over can be simplified by assuming that the three-factor and higher order interactions as negligible. neglecting higher order interactions. The remaining seven words with even number of letters will be same as in the defining relation of F1. The aliases associated with the main effects and two factor interactions are given in Table 7. three at a time and all the four. I = –ACE. I = ABCG (7. ACE. I = ABCG I = –ABD.3). The block effect can be obtained as the difference between the average response of the two fractions.3) This consists eight words of odd length (bold faced) and seven words of even length. BCF and ABCG together two at a time. The complete defining relation for the original fraction (F1) can be obtained by multiplying the four generators ABD.4) The fold-over design denoted by F consists of F1 and F2.2) Note that the sign is changed only for the odd number of letter words of the generators in F1 to obtain the fold-over design.13. all main effects are not aliased. I = –BCF. The effects from the first fraction F1(l) and the second fraction F2(l¢) are estimated separately and are combined as 1/2(l + l¢) and 1/2(l – l ¢) to obtain de-aliased effect estimates. for the original fraction. I = BCF. Thus. Since all the defining words of odd length have both +ve and –ve sign in F do not impose any constraint on the factors and hence do not appear in the defining relation of F.13 The alias structure for the original fraction (F1) lA ® A + BD + CE + FG lB ® B + AD + CF + EG lC ® C + AE + BF + DG lD ® D + AB + CG + EF lE ® E + AC + BG + DF lF ® F + BC + AG + DE lG ® G + CD + BE + AF . the defining relation is I = ABD = ACE = BCF = ABCG = BCDE = ACDF = CDG = ABEF = BEG = AFG = DEF = ADEG = CEFG = BDFG = ABCDEFG (7. the fold-over design (F) is a 2IV design. So the complete defining relation for F2 is I = –ABD = –ACE = –BCF = ABCG = BCDE = ACDF = –CDG = ABEF = –BEG = –AFG = –DEF = ADEG = CEFG = BDFG = –ABCDEFG (7. The two fractions can be treated as two blocks with 8 runs each. These are obtained from Eq. That is. Since it is a resolution IV design. The generators for the original fraction (F1) and the fold-over fraction (F2 ) are as follows: For F1: For F2: I = ABD.Two-level Fractional Factorial Designs 165 Note that we have to run the two fractions separately. The defining relation for the fold-over fraction (F2) can be obtained by changing the sign of words with odd number of letters in the defining relation of the original fraction ( F1). (7.1) (7. The words of even length which are same in F1 and F2 are the defining words of the defining relation of F which is ABCG = BCDE = ACDF =ABEF = ADEG = CEFG = BDFG.5) 7–4 The design resolution of F is IV. we get the estimates for all main factors and 1/2(li – l¢i) will lead to the estimates for the combined two-factor interactions. The effects are estimated using contracts.166 Applied Design of Experiments and Taguchi Methods Similarly.2 7–4 Fold-over Design for 2III The problem is concerned with an arc welding process. The following factors and levels have been used.15.12.14. 
the alias structure for the fold-over fraction is derived and is given in Table 7. The experiment was planned to find out the process parameter levels that maximize the welding strength. The experiments in the two fractions have been run separately and the response (weld strength) for the first fraction and fold-over design (second fraction) measured is tabulated in the form of coded data in Table 7. we can estimate the effects of both the fractions. TABLE 7. CA = – def + afg – beg + abd – cdg + ace – bcf + abcdefg = – 47 – 9 – (–27) – 13 – (–16) – 22 – (–5) + 39 = –4 Effect of A = CA n2 k 1 = 4 1/16(26 ) = –1. ILLUSTRATION 7.16. These are given in Table 7. Factor Type of welding rod (A) Weld material (B) Thickness of welding material (C) Angle of welded part (D) Current (E) Welding method (F) Preheating (G) Low level X 100 SS41 8 mm 60°C 130 A Single No heating High level Y 200 SB35 12 mm 70°C 150 A Weaving 150°C These seven factors have been assigned to the seven columns of the fold-over design given in Table 7.00 Similarly. .14 The alias structure for the fold-over fraction (F2) l¢A ® A – BD – CE – FG l¢B ® B – AD – CF – EG l¢C ® C – AE – BF – DG l¢D ® D – AB – CG – EF l¢E ® E – AC – BG – DF l¢F ® F – BC – AG – DE l¢G ® G – CD – BE – AF By combining the fold-over fraction with the original as 1/2(l i + l¢i). 25 ® D – AB – CG – EF l¢E = 17.17 i A B C D E F G 1/2(l i + l¢i) A B C D E F G = = = = = = = 0.625 3.16 First fraction l¢A = 1.375 DF = 1.625 De-aliased effects 1/2(li – l ¢i) BD + CE AD + CF AE + BF AB + CG AC + BG BC + AG CD + BE + + + + + + + FG = –1.125 EG = 0.75 ® B – AD – CF – EG l¢C = –2.Two-level Fractional Factorial Designs 167 TABLE 7.15 Weld strength data (coded value) for Illustration 7.2 Second fraction (fold over) Response 47 –9 –27 –13 –16 –22 –5 39 Run 1¢ 2¢ 3¢ 4¢ 5¢ 6¢ 7¢ 8¢ Treatment combination adcg bcde acdf cefg abef bdfg adeg (1) Response –10 37 –13 –28 –28 –13 45 –7 First fraction Run 1 2 3 4 5 6 7 8 Treatment combination def afg beg abd cdg ace bcf abcdefg TABLE 7.875 18.17. The aliases for these two are . TABLE 7.25 ® E – AC – BG – DF l¢F = –36.25 ® A – BD – CE – FG l¢B = –2.25 ® G – CD – BE – AF By combining the two fractions we can obtain the de-aliased effects for the main factors and the two-factor interactions together as given in Table 7.25 ® F – BC – AG – DE l¢G = 2.125 –1.125 –2.17.0 ® E + AC + BG + DF l¢F = 37.0 ® G + CD + BE + AF Estimate of effects for Illustration 7.75 ® C – AE – BF – DG l¢D = 32.625 0.875 AF = 1.5 ® D + AB + CG + EF l¢E = 20.625 29.5 ® B + AD + CF + EG l¢C = –0.375 From Table 7.2 Second fraction (fold over) l¢A = 1. it is observed that factors D and E have large effects compared to others.625 DG = 1.375 DE = 36.0 ® A + BD + CE + FG l¢B = –1.125 EF = –2.5 ® F + BC + AG + DE l¢G = 5.5 ® C + AE + BF + DG l¢D = 27. 7. Construct the design and analyse the data. we can conclude that the interaction DE is significant. the combination D2 E2 is selected. Suppose it was possible to run a one-half fraction of the 24 design.2 Consider Problem 5. 7. Average response for DE interaction combinations is D1E 1: D1E 2: D2E 1: D2E 2: 1/2(–9 + –5) = 1/2(–27 + –22) 1/2(–13 + –16) 1/2(47 + 39) = –7.0 The objective is to maximize the weld strength. any level (low or high) can be used. C and G do not have significant effect and not involve in any two-factor significant interactions. Since BC and AG can be assumed as negligible. Determine the alias structure.0 = –24. What is the resolution of this design? 7. 
Suppose it was possible to run a one-half fraction of the 24 design. the optimal levels for D and E should be determined considering the response of DE interaction combinations (D1E1. PROBLEMS 7. D2E1 and D2E2). For other factors. Hence.168 Applied Design of Experiments and Taguchi Methods D = AB = CG = EF E = AC = BG = DF If it is assumed that the factors A.5.6. the fold-over designs is useful to arrive at the optimal levels.17). B. Use F = ABCD and G = ABDE as generators. Give the alias structure. But usually. Write down the alias structure. the levels are fixed as discussed above. The above reduces to D = EF E = DF The two-factor interaction combinations involving EF and DF have a small effect indicating that they are not significant (Table 7. Since the other factors are insignificant. So.5 Design a 26–2 fractional factorial with I = ABCE and I = BCDF as generators.5 43.3 Construct a 25–2 design with ACE and BDE as generators. .1 Consider the first replicate (R1) of Problem 5.5 = –14. 7. D1E2. Construct the design and analyse the data. Outline the analysis of variance table. we select the level for insignificant factors based on cost and convenience for operation.4 Construct a 27–2 design. the interaction BC + AG + DE has a large effect. Thus. But. That is.1 INTRODUCTION In experimental investigation we study the relationship between the input factors and the response (output) of any process or system. The purpose may be to optimize the response or to understand the system. 55 Response 50 45 140 40 60 65 Temperature (A) 70 100 120 Pressure (B) FIGURE 8.1 A response surface.C H A P T E R 8 Response Surface Methods 8. then the surface is represented by hÿ = f(X1. X2) + e where e is the observed error in the response Y. 169 . Suppose we want to determine the levels of temperature (X1 ) and pressure (X2) that maximize the yield (Y) of a chemical process. X2) An example of a response surface is shown in Figure 8. Response Surface Methodology (RSM) can be used to study the relationship.1. If the input factors are quantitative and are a few. X2) = h. the process yield is a function of the levels of temperature and pressure which can be represented as Y = f( X1. If the expected response E(Y) = f(X1. This analysis is generally performed using computer software. 8.2 Central Composite Design (CCD) The central composite design for k factors include: (i) nF factorial points (corner/cube points) each at two levels indicated by (–1. a sequential experimentation strategy is followed. Such designs are called response surface designs.170 Applied Design of Experiments and Taguchi Methods In RSM.2. Y = C0 +  Ci Xi +  Cii X i2 +   Cij X i X j + e i 1 i 1 i< j K K (8.2. Some of the efficient designs used in second-order experiment are the Central Composite Design (CCD) and the Box–Behnken Design. This design has already been discussed in Chapter 5. The objective of RSM is to determine the optimum operating conditions. These designs should allow minimum number of experimental runs but provide complete information required for fitting the model and checking model adequacy. (iii) 2k axial points (star points) . We will not obtain error estimate in these designs unless some runs are replicated. This involves the search of input factor space by using first-order experiment followed by a second-order experiment. + bkXk + e (8. 8..1 Designs for Fitting First-order Model Suppose we want to fit a first-order model in k variables Y = C0 +  Ci Xi + e i 1 k (8. 
higher order polynomial.2 RESPONSE SURFACE DESIGNS Selecting an appropriate design for fitting and analysing the response surface is important. The response surface analysis is then performed using the fitted surface.. The first order equation is given by Y = b0 + b1X1 + b2 X2 + . 8.2) The method of least squares is used to estimate the parameters (discussed in Chapter 2). Here the low and high levels of the factors are denoted by ±1 notation. The model parameters can be estimated effectively if proper experimental designs are used to collect data. A common method of including replications is to take some observations at the center point. +1) (ii) nC centre points indicated by 0.1) If there is curvature in the system.3) These designs include 2k factorial and fractions of 2k series in which main effects are not aliased with each other. usually a second-order model given below is used. Eq. Determining the a value for the axial points Choice of factorial portion of the design: A CCD should have the total number of distinct (k + 1) (k + 2) 2 design points N = nF + 2k + 1.1 k 2 2 3 3 4 4 4 (k + 1) (k + 2)/2 6 6 10 10 15 15 15 Selection of CCD with economical run size N 7 9 11 15 17 20 25 nF 2 4 4 8 8 11 16 Factorial portion (cube points) 22–1(I = AB) 22 3–1 2III (I = ABC) 3 2 4–1 2III (I = ABD) 11 ´ 4 sub matrix of 12 run PB design* 24 *Obtained by taking the first four columns and deleting row 3 of the 12-run Plackett–Burman design. the choice of a depends on the geometric nature of and practical constraints on the design region. k = number of factors. 2k = number of star points (k + 1) (k + 2) 2 = smallest number of points required for the second order model to be estimated. A design that produces contours of constant standard deviation of predicted response is called rotatable design. Choosing the factorial portion of the design 2. Table 8. This design is often called spherical CCD. the best choice of a = k . Unless some practical considerations dictate the choice of a. a value In general a value is selected between 1 and Choice of k . Further discussion on small composite designs on the Placket–Burman designs can be found in Draper and Lin (1990). If the region of interest is spherical. (8.4) This design provides good predictions throughout the region of interest. . In general. must be at least where.1 can be used to select a CCD with economical run size for 2 £ k £ 4. TABLE 8. A CCD is made rotatable by selecting a = (nF)1/4 (8. Number of center points 3.4) serves as a useful guideline. nF = number of factorial points.Response Surface Methods 171 In selecting a CCD. the following three issues are to be addressed: 1. 1 2 1.172 Applied Design of Experiments and Taguchi Methods Number of runs at the center point The rule of thumb is to have three to five center points when a is close to k . –1.2 Run Central composite design for k = 2 Factors A B –1 –1 1 1 0 0 –1.14 1. The CCD for k = 2 and k = 3 with five centre points is given in Tables 8. 0 2. Further discussion on choice of a and the number of runs at the center point can be found in Box and Draper (1987). –1 0. 0 –1.14 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 (a = k and nc = 5) –1 1 –1 1 –1.3 respectively. If the purpose is to obtain the error it is better to have more than 4 or 5 runs. – 2 1.41 0 0 0 0 0 0 0 . 0 0. –1 FIGURE 8.41 1. The graphical representation of CCD for 2 factors is shown in Figure 8. 0.2. 1 – 2.2 and 8.2 The central composite design for two factors. TABLE 8. 
3 Run Central composite design for k = 3 Factors B –1 –1 1 1 –1 –1 1 1 0 0 –1. B.4. The advantage of this design is that each factor requires only three levels. Similarly. C in the experiment.73 1. These were developed by combining two-level factorial designs with balanced or partially balanced incomplete block designs.2.Response Surface Methods 173 TABLE 8. .73 0 0 0 0 0 0 0 0 0 C –1 –1 –1 –1 1 1 1 1 0 0 0 0 –1.73 0 0 0 0 0 0 0 A 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 (a = –1 1 –1 1 –1 1 –1 1 –1. The two crosses (Xs) in each block are replaced by the two columns of the 22 factorial design and a column of zeros are inserted in the places where the cross do not appear.73 1. designs for k = 4 and k = 5 can be obtained. A balanced incomplete block design with three treatments and three blocks is given by Block 1 1 2 3 X X Treatments 2 3 X X X X The three treatments are considered as three factors A.73 1. This procedure is repeated for the next two blocks and some centre points are added resulting in the Box-Behnken design for k = 3 which is given in Table 8. The construction of the smallest design for three factors is explained below.3 Box–Behnken Designs Box and Behnken (1960) developed a set of three-level second-order response surface designs.73 0 0 0 0 0 k and nc = 5) 8. The experimental designs used to develop the regression models must facilitate easy formation of least squares normal equations so that solution can be obtained with least difficulty.4 Run 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Box–Behnken design for k = 3 A –1 –1 1 1 –1 –1 1 1 0 0 0 0 0 0 0 B –1 1 –1 1 0 0 0 0 –1 –1 1 1 0 0 0 C 0 0 0 0 –1 1 –1 1 –1 1 –1 1 0 0 0 8. 8. X3 are the coded variables with ±1 as high and low levels of the factors A. Suppose we have the following three factors studied each at two levels in an experiment. Fitting of a polynomial model can be treated as a particular case of multiple linear regression.3.5) where X1.1 Analysis of First-order Design Let us consider a 23 factorial design to illustrate a general approach to fit a linear equation. X2.174 Applied Design of Experiments and Taguchi Methods TABLE 8. Factors Low (–1) Time (min) (A) Temperature (B) Concentration (%) (C) Let the first-order model to be fitted is 6 240°C 10 Level High (+) 8 300°C 14 ˆ =C +C X +C X +C X Y 0 1 1 2 2 3 3 (8.3 ANALYSIS OF DATA FROM RSM DESIGNS In this section we will discuss how polynomial models can be developed using 2k factorial designs. B and C respectively. . 5 Run X1 (1) a b ab c ac bc abc –1 1 –1 1 –1 1 –1 1 Data for fitting first-order model Factors X2 –1 –1 1 1 –1 –1 1 1 X3 –1 –1 –1 –1 1 1 1 1 61 83 51 70 66 92 56 83 Y The data is analysed as follows. temperature etc. TABLE 8. In order to facilitate easy computation of model parameters using least-square method. we have X1 = Time  7 1. h = value of high level of the factor l = value of low level of the factor Accordingly.0 Suppose we have the following experimental design and data (Table 8. a column X0 with +1s are added to the design in Table 8. X3 = Concentration  12 2.5 as given in Table 8.Response Surface Methods 175 The relation between the coded variable (X) and the natural variables is given by V  (h + l )/2 (h  l )/2 (8.6.0 . X2 = Temperature  270 30 .6 X0 1 1 1 1 1 1 1 1 First-order model with three variables X1 –1 1 –1 1 –1 1 –1 1 X2 –1 –1 1 1 –1 –1 1 1 X3 –1 –1 –1 –1 1 1 1 1 y 61 83 51 70 66 92 56 83 . TABLE 8.6) V = natural/actual variable such as time.5). 
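Because the columns of the 2^3 design are mutually orthogonal, the least-squares fit of the first-order model is straightforward. The following is a minimal sketch (illustrative Python/NumPy code, not part of the original text) that fits the model to the data of Table 8.6 and reproduces the coefficient estimates obtained in the analysis that follows.

```python
# Minimal sketch: least-squares fit of Y = b0 + b1*X1 + b2*X2 + b3*X3
# to the coded 2^3 data of Table 8.6.
import numpy as np

X1 = np.array([-1,  1, -1,  1, -1,  1, -1,  1])
X2 = np.array([-1, -1,  1,  1, -1, -1,  1,  1])
X3 = np.array([-1, -1, -1, -1,  1,  1,  1,  1])
y  = np.array([61, 83, 51, 70, 66, 92, 56, 83])

# Design matrix with the X0 column of +1s added, as in Table 8.6
X = np.column_stack([np.ones(8), X1, X2, X3])

b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)                                   # [70.25, 11.75, -5.25, 4.0]

# Because the columns are orthogonal, each coefficient is simply the
# sum of (column * y) divided by the number of runs:
print(X1 @ y / 8, X2 @ y / 8, X3 @ y / 8)  # 11.75, -5.25, 4.0
```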
25 1  CA    2  n 2 k 1  C1 = where.50 ..75 (Effect of A) = Similarly.176 Applied Design of Experiments and Taguchi Methods Obtain sum of product of each column with response y.25 + 11. CC = 32 Therefore. SSTotal = SSLinear model + SSLack of fit These sum of squares are computed as follow: SSTotal =  yi2 – CF i =1 N (8. CB = – 42. That is.8) = {(61)2 + (83)2 + .7) Analysis of variance: Usually. the total sum of squares is partitioned into SS due to regression (linear model) and residual sum of squares. Therefore.0 – 39480. the regression coefficients can be estimated as follows: C0 = y = 562 8 1 2 = 70.5) is simply a 23 design.0 X3 Y (8.. CA = 94.5 = 1475. y = grand mean CA = contrast A = 94 n = number of replications = 1 b1 = 94/8 = 11.25 and b3 = 32/8 = 4. 0y = SX0 y = 562 (Grand total) 1y = SX1 y = 94 2y = SX 2 y = – 42 3y = SX3 y = 32 This design (Table 8.00 The fitted regression model is ˆ = 70.75 X1 – 5. b2 = – 42/8 = –5. the experiment is not replicated and hence pure SS is zero. And SS residual (SSR) = Experimental error SS or pure SS + SS lack of fit In this illustration. + (83)2 }  (562)2 8 = 40956.25 X2 + 4. Hence these sums are nothing but contrast totals. As these types of designs do not provide any estimate of experimental error variance.50 1475. To obtain the experimental error either the whole experiment is replicated or some center points be added to the 2k factorial design.02 When tested with the lack of fit. These observations are used to measure experimental error and will have nc – 1 degrees of freedom. Suppose nc number of center points are added.5 + 220.00 22.5 (Since n = 1 and k = 3) (32) 2 8 Similarly. ( YF – YC ) gives an additional degree of freedom for measuring the lack of fit. In practice we need experimental error when first-order or second-order designs are used. Otherwise the experimenter does not know whether the equation adequately represents the true surface. Addition of center points do not alter the estimate of the regression coefficients except that b0 becomes the mean of the whole experiment. The sum of squares for lack of fit is given by nc nF nc + nF (YF  YC ) 2 (8. the lack of fit cannot be tested.5 + 128. This can be computed from the contrast totals. The effects are estimated using contrasts.63 F0 86.7. SSB = (  42) 2 8 = 220.8.Response Surface Methods 177 SSModel = All factor sum of squares. TABLE 8. the linear model is a good fit to the data.9) where nF is the number of factorial points. which is given below: SSA = (C A )2 n2 k = (94)2 8 = 1104.33 5.0 = 1453 The ANOVA for the first-order model is given in Table 8. The effects and coefficients for the first-order model are summarized in Table 8. .7 Source Linear model Lack of fit (residual) Total ANOVA for the first-order model Sum of squares 1453.0 SSLinear model = 1104. Note that SS lack of fit = SSAB + SSAC + SSBC + SSABC. A significant lack of fit indicates the model inadequacy. If YC is the mean of these observations and YF is the mean of factorial observations.5 and SSC = = 128.50 Degrees of freedom 3 4 7 Mean square 484. 50 0.5). the surface plot of the response is a plane (Figure 8.5. (8.25 4.178 Applied Design of Experiments and Taguchi Methods TABLE 8.50 3. This is also evident from the contour plot of response (Figure 8. A FIGURE 8.50 0. The response surface can be analysed using computer software to determine the optimum condition.00 Coefficient 70.7)].00 – 0.6).5. 
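As a small illustration of the ANOVA arithmetic for the unreplicated 2^3 experiment above (this code is a sketch, not from the text), the model and lack-of-fit sums of squares can be computed from the column contrasts; the values agree with those in Table 8.7.

```python
# Sketch: ANOVA for the first-order model fitted to the 2^3 data,
# using contrasts to obtain the factor sums of squares.
import numpy as np

X1 = np.array([-1,  1, -1,  1, -1,  1, -1,  1])
X2 = np.array([-1, -1,  1,  1, -1, -1,  1,  1])
X3 = np.array([-1, -1, -1, -1,  1,  1,  1,  1])
y  = np.array([61, 83, 51, 70, 66, 92, 56, 83])
N  = len(y)

CF = y.sum() ** 2 / N                      # correction factor
ss_total = (y ** 2).sum() - CF             # 1475.5
ss_factor = [(col @ y) ** 2 / N for col in (X1, X2, X3)]
ss_model = sum(ss_factor)                  # 1104.5 + 220.5 + 128.0 = 1453.0
ss_lack_of_fit = ss_total - ss_model       # 22.5 (AB, AC, BC, ABC pooled)

ms_model = ss_model / 3                    # 3 model degrees of freedom
ms_lof = ss_lack_of_fit / 4                # 4 residual degrees of freedom
print(ss_total, ss_model, ss_lack_of_fit, ms_model / ms_lof)   # F0 close to 86
```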
it is evident that high value of time and concentration and low value of temperature produces high response.00 1.50 8. .8 Effects and coefficients for the first-order model Effect estimate 23.75 –5. Because the fitted model is first order (includes only main effects).50 Term in the model Constant Time (A) Temperature (B) Concentration (C) AB AC BC ABC Figures 8.3 and 8.00 1. From the regression model [Eq.3 Normal plot of the effects for the first order design given in Table 8.4 show the normal and half-normal plot of the effects respectively for the first-order design given in Table 8.50 –10.25 11.25 1.00 – 0. .Response Surface Methods 179 FIGURE 8.5.0 6.0 7.5 7.5 7.0 6.5.5 8.0 Concentration * Temperature Concentration * Time 14 13 12 FIGURE 8. Temperature * Time 298 286 274 262 250 6.0 7.5 Contour plots for the first-order design given in Table 8.0 11 10 6.5 8.4 Half-normal plot of the effects for the first-order design given in Table 8. (8.12)] Similarly. ÿ x = natural variable AH = value of high level of factor A AL = value of low level of factor A ÿ Suppose.2 Analysis of Second-order Design The general form of a second-order polynomial with two variables is: 2 2 Y = C0 + C1 X1 + C2 X2 + C11 X1 + C22 X2 + C12 X1 X2 + e (8. nF = 4. when Xi = 1. the coded variables (Xi) are converted into natural variables (temperature.2). time. we normally use CCD with two factors (Table 8. The CCD for two factors with data is given in Table 8.11) where. the factor level will be about 320°.5.12) Now when Xi = 0 (center point). For conducting the experiment.180 Applied Design of Experiments and Taguchi Methods 80 Y 70 60 50 6 7 Time 8 300 275 250 Temperature 90 Y 80 70 60 6 7 Time 8 10 14 12 Concentration 65 Y 50 55 50 250 275 300 Temperature 10 14 12 Concentration FIGURE 8.6 Surface plots of response for the first order design given in Table 8.10) In order to estimate the regression coefficients. 8. the factor A is the temperature with low level = 240° and high level = 300°.9. the level of the factor is set at 270° [from Eq.) using the following relation: Xi = Y  (AH + AL )/2 (AH  AL )/2 (8. Xi = Temperature  270 30 (8. .682 (the axial point).3. pressure etc. Then. This design has k = 2. nC = 5 and a = 2 . 125 (iy ) ÿ ÿ bii = 0.4 7.2 (0y ) – 0.0 9. 11y = SX1 y.5 8.3 10.Response Surface Methods 181 Following this procedure. 2y = SX2 y.0 9. Fitting the model 2 2 Step 1: To the CCD (Table 8. 2 2 0y = SX0 y.125 (iiy ) + 0.5 5. Step 2: Obtain sum of product of each column with y.414 0 0 0 0 0 2 X1 2 X2 X1X2 1 –1 –1 1 0 0 0 0 0 0 0 0 0 y 6.10 S iiy where.25 ( ijy ) ÿ The quantity Siiy is sum of cross products of all the squared terms with y.8 1 1 1 1 2 2 0 0 0 0 0 0 0 1 1 1 1 0 0 2 2 0 0 0 0 0 .6 8. 22y = SX2 y and 12y = SX1X2 y Step 3: Obtain regression coefficients using the following formulae: b0 = 0. X1X2 and y (response) to the right side as given in Table 8.9 X0 1 1 1 1 1 1 1 1 1 1 1 1 1 Data for central composite design with two factors X1 –1 1 –1 1 –1. 1y = SX1 y.414 0 0 0 0 0 0 0 X2 –1 –1 1 1 0 0 –1.0 7.00 TABLE 8.7 9.0 8.2) add column X0 to the left and X1 .5 9.20 1y = SX1 y = 7.01875 Siiy – 0.10 (0y) ÿ ÿ b ij = 0.9 10. 2 2 In this model Siiy is SX1 y + SX2 y For the data shown.414 1.12 2 y = 59 11y = SX1 2 22y = SX2 y = 62 12y = SX1X 2 y = 1.9. That is.96 2y = SX2 y = 4. Denote these sum of products as (0y) for the X0 column. the parameters of the model are estimated as follows: 0y = SX0 y = 110. X2 . 1y for X1 column and so on. Siiy = 11y + 22y ÿ ÿ bi = 0. 
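As a rough sketch of how such a second-order design is set up and fitted with general-purpose software (the code below and its simulated response values are illustrative only and are not from the text), the fragment builds the rotatable two-factor CCD with five centre points described above, converts coded settings to natural units with the relation of Eq. (8.11), and fits the full quadratic model by least squares.

```python
# Sketch only: k = 2 central composite design (4 cube points, 4 axial points
# at alpha = sqrt(2), 5 centre points) and a least-squares quadratic fit.
import numpy as np

alpha = np.sqrt(2)                         # rotatable choice, alpha = (nF)**0.25
cube   = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial  = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
centre = [(0, 0)] * 5
design = np.array(cube + axial + centre)   # 13 runs x 2 factors

def to_natural(x_coded, high, low):
    """Natural value from a coded level, as in Eq. (8.11)."""
    return x_coded * (high - low) / 2 + (high + low) / 2

print(to_natural(design[:, 0], 300, 240))  # e.g. coded 0 -> 270 deg

x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones(len(design)), x1, x2, x1**2, x2**2, x1 * x2])

# Simulated response purely for demonstration (coefficients chosen arbitrarily)
rng = np.random.default_rng(1)
true_b = np.array([10.0, 1.0, 0.5, -1.5, -0.8, 0.3])
y = X @ true_b + rng.normal(scale=0.1, size=len(design))

b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(b, 3))                      # b0, b1, b2, b11, b22, b12
```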
the experimental design can be converted into an experimental layout.414 1. 10 (0y) = 0.8 show the contour plot and surface plot of response respectively for the Central Composite Design (CCD) with two factors A and B discussed above.25 (ijy) = 0.0 FIGURE 8.125 (7.2 (110.2) = –1.13) Equation (8.96) = 0.25 Therefore.515 X – 1.5 1.38 X 2 – 1.0 –1.00 b12 = 0.0 A 0.10 (110.125 (62) + 0.00 X 2 + 0.182 Applied Design of Experiments and Taguchi Methods The regression coefficients are estimated as follows: b0 ÿ b1 ÿb2 b11 ÿ = 0. Figure 8.995 X + 0.7 Contour plot of response for CCD with two factors.10 (110.20) – 0. Usually. the fitted model is ˆ = 9.01875 (121) – 0.5 8–9 9–10 > 10 –1.2) = –1.0 6–7 7–8 –0.125 (4.125 (59) + 0.2 (0y) – 0.10 (59 + 62) = 9.125 (iiy) + 0.01875 (121) – 0. we use computer software for studying these designs.12) = 0.515 = 0.0 –0.38 b22 = 0.7 and 8. .13) can be used to study the response surface.01875 S iiy – 0.5 0.94 + 0.0 Y 0.125 (1y) = 0.00) = 0.25 (1.94 = 0. 1. The software gives the model analysis including all the statistics and the relevant plots.10 S iiy = 0.125 (2y) = 0.995 = 0.25 X X Y 1 2 1 2 1 2 (8.5 <4 4–5 5–6 B 0. 828 0 0 0 0 0 0 . Usually.Response Surface Methods 183 10 8 Y 6 1 4 –1 A –1 1 0 0 B FIGURE 8. Note that y in Table 8.828 2.10.8 Surface plot of response for CCD with two factors.828 2.682 1.682 0 0 0 0 0 0 0 0 X3 –1 –1 –1 –1 1 1 1 1 0 0 0 0 –1.10 X0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 X1 –1 1 –1 1 –1 1 –1 1 –1.828 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 2.828 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 2.682 0 0 0 0 0 0 Data for CCD with three factors 2 X1 2 X2 2 X3 X 1X 2 1 –1 –1 1 1 –1 –1 1 0 0 0 0 0 0 0 0 0 0 0 0 X1X3 1 –1 1 –1 –1 1 –1 1 0 0 0 0 0 0 0 0 0 0 0 0 X2X3 1 1 –1 –1 –1 –1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 y 1 1 1 1 1 1 1 1 2.10 is the response column to be obtained from the experiment.682 1. TABLE 8.682 1. we use computer software for solution of these models which will give the fitted model including ANOVA and other related statistics and generate contour and response surface plots.682 0 0 0 0 0 0 0 0 0 0 X2 –1 –1 1 1 –1 –1 1 1 0 0 –1. CCD with three variables Central Composite Design with three variables (factors) is given in Table 8.828 2. 184 Applied Design of Experiments and Taguchi Methods PROBLEMS 8.1 Factors X1 –1 1 –1 1 –1 1 –1 1 X2 –1 –1 1 1 –1 –1 1 1 X3 –1 –1 –1 –1 1 1 1 1 Y 31 43 34 47 45 37 50 41 8.2 The data given in Table 8.12 have been collected from a Central Composite Design with two factors. TABLE 8.414 0 0 0 0 0 0 0 Data for Problem 8.414 1.2 X2 –1 –1 1 1 0 0 –1.12 X1 –1 1 –1 1 –1.11 Data for Problem 8.1 Use the data from Table 8.414 0 0 0 0 0 y 34 26 13 26 30 31 18 30 20 18 23 22 20 .414 1. Fit a second-order model. TABLE 8.11 23 design and fit a first-order model. Taguchi Methods PART II . Whoever pays the loss. pollution. noise.1 INTRODUCTION Quality has been defined in different ways by different experts over a period of time. The quality of a product is measured in terms of the features and characteristics that describe the performance relative to customer requirements or expectations. the loss will be borne by the manufacturer and after warranty it is to be paid by the customer. The following are the types of loss: · · · · · Product returns Warranty costs Customer complaints and dissatisfaction Time and money spent by the customer Eventual loss of market share and growth Under warranty. 
9.2 TAGUCHI DEFINITION OF QUALITY The quality of a product is defined as the loss imparted by the product to society from the time the product is shipped to the customer. ultimately it is the loss to the society. A truly high quality product will have a minimal loss to society. repair. the product must be delivered in right quality at right time at right place and provide right functions for the right period at right price. 187 .C H A P T E R 9 · · · · Quality Loss Function 9. The true evaluator of quality is the customer. To satisfy the customer. etc. Some of them are as follows: Fitness for use Conformance to specifications Customer satisfaction/delight The totality of features and characteristics that satisfy the stated and implied needs of the customer The simplest definition of high quality is a happy customer. variation in performance. The loss may be due to failure. the quality characteristic will have a target. the quality loss increases on either side of T. Smaller–the better: It is a non negative measurable characteristic having an ideal target as zero. Similarly. Nominal–the best: When we have a characteristic with bi-lateral tolerance. Larger–the better: It is also a non negative measurable characteristic that has an ideal target as infinity (¥). Taguchi defined this relation as a quadratic function termed Quality Loss Function (QLF). force. length. This loss is attributed to the variation in quality of performance (functional variation). The quality characteristic is the object of interest of a product or process. Y = value of the quality characteristic (e. For example: Tyre wear. as Y deviates from the target T.g. process defectives. Thus. the quality loss function is given by L ( Y ) = K( Y – T ) 2 (9. That is. confirmation to specification is an inadequate measure of quality.. pollution. From Figure 9. For example: Fuel efficiency. 9. It can be related to the product quality characteristics. etc. Taguchi quantified this loss through a quality loss function.01 mm has the nominal value of 10 mm. the loss is ‘0’. the nominal value is the target. if all parts are made to this value.3 TAGUCHI QUALITY LOSS FUNCTION The loss which we are talking about is the loss due to functional variation/process variation.1) where. When Y = T. . the variation will be zero and it is the best.1. For each quality characteristic. there exist some function which uniquely defines the relation between economic loss and the deviation of the quality characteristic from its target. etc. it can be observed that. Here the nominal value is 230 V. diameter etc. The quality loss is due to customer dissatisfaction. strength values. For example: A component with a specification of 10 ± 0.1 Quality Loss Function (Nominal—the best case) When the quality characteristic is of the type Nominal–the best. Generally.) L(Y) = loss in ` per product when the quality characteristic is equal to Y T = target value of Y K = proportionality constant or cost coefficient which depends on the cost at the specification limits and the width of the specification Figure 9.188 Applied Design of Experiments and Taguchi Methods 9. There are three types of targets. Note that a loss of A0 is incurred even at the specification limit or consumer tolerance (D0). if the supply voltage has a specification of 230 ± 10 V.3. And QLF is a better method of assessing quality loss compared to the traditional method which is based on proportion of defectives.1 shows the QLF for nominal—the best case. 00 Thus.00(Y – 230)2 At Y = T(230).2) Therefore. 
the loss is zero. Suppose the supply voltage for a compressor is the quality characteristic (Y). L (Y ) = A0 '2 0 (9. The specification for voltage is 230 ± 10 V. . Target voltage = 230 V Consumer tolerance (D0) = ±10 V.00(235 – 230)2 = ` 125.3) If A0 = ` 500 at a deviation of 10 V.Quality Loss Function 189 FIGURE 9. then the cost coefficient K= 500 (10 V)2 = ` 5/V2 And L(Y) = 5. For a 5 V deviation.1 QLF for nominal—the best case. the target value. Let A0 = average cost of repair/adjustment at a deviation of D0 L ( Y ) = K( Y – T ) 2 K= L (Y ) (Y  T ) 2 At the specification. L(235) = 5. we can estimate the loss for any amount of deviation from the target. K = A0 '2 0 ( Y – T) 2 (9. . Yi values where. (9. . .9) .. 3..1). 2.. Then we will have Y1.4) = (9. L(Y) = K(MSD) MSD = [(Y1  T )2 + (Y2  T )2 + . L(Y) = K(MSD) L(Y) = K [T 2 + (Y  T )2 ] Note that the loss estimated by Eq. where.6) = On simplification. (9.8) is the average loss per part/product (r) Total loss = r(number of parts/products) (9. n Let the average of this sample = Y .1) is for a single product.7) sn = population standard deviation (Y  T )2 = bias of the sample from the target. For practical purpose we use sample standard deviation (sn–1) Since..190 Applied Design of Experiments and Taguchi Methods Estimation of average quality loss: The quality loss we obtain from Eq.8) 9. Therefore..2 Quality Loss Function (Smaller—the better case) For this case the target value T = 0 Substituting this in Eq.. 1 n  (Yi  T )2 MSD = 2 Tn + (Y  T )2 (9. The quality loss is modelled as follows: L( Y ) = K ( Y – T) 2 = K (Y1  T )2 + K (Y2  T )2 + ... + K (Yn  T )2 n K [(Y1  T )2 + (Y2  T )2 + ..5) The term in the parenthesis is the average of all values of (Yi – T)2 and is called Mean Square Deviation (MSD).3. + (Yn  T )2 ] n (9. Suppose we take a sample of n parts/products.. we get L(Y) = K(Y – 0)2 = K Y2 (9. i = 1. The quality loss estimate should be an average value and should be estimated from a sample of parts/products.. This Y may or may not be equal to the target value (T). Y2. (9. + (Yn  T )2 ] n (9. + + +   2 n  Y22 Yn2   Y1  . That is  1  L ( Y) = K  2  Y  (9.9). The average loss per part/product can be obtained by substituting T = 0 in Eq. The loss per part/product averaged from many sample parts/products is given by L(Y ) = K  1  1 1 1  ..3. Therefore..12) and K = L( Y ) Y2 = A0 '2 0 (9. (9. a larger–the better characteristic can be considered as the inverse of a smaller– the better characteristic. L(Y) = K [T 2 + (Y  T )2 ] = K [T 2 + (Y  0) 2 ] = K (T 2 + Y 2 ) (9.8).2 QLF for smaller—the better case. A0 Loss ( ) C 0 %0 USL Y FIGURE 9.13) Figure 9. (9.3 Quality Loss Function (Larger—the better case) Mathematically.3 shows the quality loss for the case of larger–the better type of quality characteristic.11) 9. we can obtain L(Y) for this case from Eq.Quality Loss Function 191 (9.10) and K = L (Y ) Y 2 = A0 2 '0 Figure 9.2 shows the loss function for smaller—the better type of quality characteristic. 225. 250. 225. .1 Brand A Brand B Data for Illustration 9. 240. 240. 230. The specification for the voltage of the compressor of a specific capacity is 230 ± 20 V. 240. 250. 250. 250. 255. =K  1  1  2 n   Yi     (9. 230 Determine which brand of compressors should be purchased. The consumer loss at the specification was estimated to be ` 3000.1 (compressor voltage) 230. 245. 210. 230. 235.3 Y QLF for larger—the better type of quality characteristic. 
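Before working through the illustrations, here is a minimal sketch of the loss computations just described (illustrative Python code, not part of the text; the function names and the demonstration sample are assumptions). It uses the cost coefficient K = A0/D0^2, the single-unit loss L(Y) = K(Y - T)^2, and the average loss per unit K[s^2 + (Ybar - T)^2], with the 230 ± 10 V, ` 500 example from above.

```python
# Minimal sketch of the nominal-the-best loss computations.
import numpy as np

def loss_coefficient(A0, delta0):
    """K = A0 / delta0**2: cost at the specification limit over tolerance squared."""
    return A0 / delta0 ** 2

def loss_single(y, T, K):
    """L(Y) = K * (Y - T)**2 for one unit."""
    return K * (y - T) ** 2

def average_loss(sample, T, K):
    """Average loss per unit, K * [s^2 + (ybar - T)^2], i.e. K * MSD."""
    sample = np.asarray(sample, dtype=float)
    ybar = sample.mean()
    s2 = sample.var(ddof=1)              # sample variance (s with n - 1)
    return K * (s2 + (ybar - T) ** 2)

K = loss_coefficient(A0=500, delta0=10)  # the 230 +/- 10 V example: K = 5 per V^2
print(loss_single(235, T=230, K=K))      # 125.0, as computed above

# Demonstration only: a made-up sample of four measured voltages
print(average_loss([228, 232, 236, 226], T=230, K=K))
```

The same functions, with the appropriate K, carry over to the brand and supplier comparisons in the illustrations that follow.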
The output voltage data collected for 20 compressors are given in Table 9. 230. 250 235. 230. 235. 245. 212.14). 245.1.192 Applied Design of Experiments and Taguchi Methods A0 Loss ( ) C O %0 LSL FIGURE 9. 245. 240. 230. 230. 240.14) = K(MSD) On simplification of Eq. 215. 215. 230. 245. This includes repair/replacement cost and other related cost based on customer complaints. 245.1 Nominal–the Best Case A company manufactures coolers of various capacities in which the compressor forms the critical item.15) ILLUSTRATION 9. TABLE 9. (9. the average loss per part/product will be given by L (Y ) = K  3T 2  1 +   Y2  Y2    (9. 235. The company purchases two types of compressors (Brand A and Brand B). 240. 230. 255. 212. 0 MSD 202.4 140.00 Mean ( Y ) 232.2.Quality Loss Function 193 SOLUTION: Target value (T) = 230 V Consumer tolerance (D0) = ± 20 V Consumer loss (A0) = ` 3000 L ( Y ) = K ( Y – T) 2 3000 = K(20)2 K = 7.00 1050. L(Y) = KY2 K = A0 '2 0 100 (2) 2 [from Eq.0 L( Y) 1518.56 59. TABLE 9.2 239. . ILLUSTRATION 9. n = 20 and T = 230 The mean and standard deviation values computed for the two brands of compressors and the average loss per compressor are given in Table 9. it is recommended to purchase Brand B compressor.5 [T 2 + (Y  T )2 ] where.0 Since the average loss per compressor is less for Brand B.2 Smaller–the Better Case The problem is concerned with the manufacturing of speedometer cable casings using the raw material supplied by two different suppliers.2 Brand A B Average loss per compressor (Illustration 9. Y = 6 Yi n and T2 = 6 Yi2  nY 2 n  1 . The specified shrinkage (D0) = 2% Estimated consumer loss (A0) = ` 100 The loss function.5 L(Y) = K(MSD) = 7.10)] = = 25 The data collected on 20 samples from the two suppliers is given in Table 9.1) Variance (s 2) 197. (9.3. The output quality characteristic is the shrinkage of the cable casing. 820.2 (Percentage shrinkage) 0. 0.3 (Life of bulbs in hours) 860.18. 0. L(Y) = K(1/Y2) .22.001055 L( Y) 1. 0. Hence. 780. 600.24. 0. 650.4.20.11). 750 750. 690.32. 0.20.11.32. 0.16. 0.15. 880. 0. 690. 920.18.10.16.18. 750.3 Larger–the Better Case This problem is concerned with the comparison of the life of electric bulbs of same wattage manufactured by two different companies. 0. 0. 0.28.3 Supplier 1 Supplier 2 Data for Illustration 9. SOLUTION: The loss function. 650. 750.22.15. 0. 0.19.4 Supplier 1 2 Average loss per cable (Illustration 9. 0. 820. 750. 0. 900.12. 870. 0.17 The average loss per casing is computed from Eq.28. 780. 910. TABLE 9. 0.750. 0. 650. 0. 0. 780.18. (9. 0. and the average loss per cable is given in Table 9.19.20.24. ILLUSTRATION 9. 780. The specified life (D0) = 1000 hr Consumer loss (A0) = ` 10 The data collected is given in Table 9. 0.5.26 0.20.11.19. 0. 0. 750. 800.19. 0. 670. 760.003424 0.2) Variance (s 2) 0.32. 0. 650. 810. 0. 900. 0. 0. 0. 940. 0.20. TABLE 9. 680. 0. 0. 910.478 0.26. 0. The quality characteristic (Y) is the life of the bulb in hours.12.32. 0. 650. 700 Determine the average loss per bulb for the two companies. 0.627 Mean square ( Y 2) 0. 2 2 L(Y) = K (T + Y ) 2 2 = 25 (T + Y ) The mean. 810.10. Supplier 2 is preferred. 910. 0. 0. 880. 580.5 Company 1 Company 2 Data for Illustration 9.20.04025 The average loss per casing is less with Supplier 2.17.194 Applied Design of Experiments and Taguchi Methods TABLE 9. variance.15.055696 0.15. K  3T 2  1 +   Y2  Y2    L( Y ) = The computed values are summarized in Table 9.4.4 ESTIMATION OF QUALITY LOSS 9. TABLE 9. 
Company 2 is preferred.00 6.28. L(Y) 18.6 Company 1 2 Average loss per product (Illustration 9.13)] = 10(1000)2 = 10 ´ 106 The average loss per product is computed from Eq.68.15) given below.3) Variance (s 2) 10.25 Since the average loss is less with Company 2.6 16.2 Quality Loss Function Method As already discussed.Quality Loss Function 195 K = A0 ' 2 0 [Eq.1 Traditional Method Usually in manufacturing companies.4.056.45 Mean square ( Y 2) 5. it is assumed that all parts with in the specification will not have any quality loss. In this. the quality loss is estimated by considering the number of defects/defectives.6.894. Depending on the type of quality characteristic. (9.75 Average loss per product. (9.516. Suppose we have a nominal–the best type of quality characteristic. this method is based on dispersion (deviation from target). 9.00 6918. Loss by defect = Proportion out of specification ´ number of products ´ cost per product 9. Loss by dispersion = A0 '2 0 [T 2 + (Y  T )2 ] ´ number of products . quality loss is estimated using an appropriate quality loss function. The annual production is 50. T = 40 and D0 = 4).e. proportion out of specification above USL is 0.. to reduce the quality loss we need to reduce the dispersion (variation) there by improving quality. The proportion out of specification can be estimated from normal distribution as follows. The process average ( Y ) is centred at the target T = 40 with a standard deviation of 1. The consumer loss was estimated to be ` 20.33. Determine which process is economical based on quality loss. total proportion of product falling out of specification limits is 0.1 Two processes A and B are used to produce a part.0027 ´ 50.64 The specification for the part is 100 ± 10.000 = 30 4 2 = 1. Loss by defect = Proportion out of specification ´ number of parts ´ cost per part = 0.00 10.196 Applied Design of Experiments and Taguchi Methods ILLUSTRATION 9.0027 (0. The following data has been obtained from the two processes.875 (1..0 Therefore.00 13.000) = ` 1. So. Z USL = (USL  Y ) T = 44  40 1. .83 B 105.4 Comparison of Quality Loss A company manufactures a part for which the specification is 40 ± 4 mm (i.332 + (40 – 40)2] ´ 50. Similarly. proportion falling below LSL is 0. Note the difference between the two estimates.000 parts. Process Mean Standard deviation A 100. The Upper Specification Limit (USL) is 44 and the Lower Specification Limit (LSL) is 36.769) (50.e. The cost of repairing or resetting is ` 30 (i.00135. Thus. PROBLEMS 9.00135.000 ´ 30 = ` 4.33 = 3.844.27%).050. Loss by dispersion = A0 '2 0 [T 2 + (Y  T )2 ] ´ number of products [1.65. A0 = 30). 7 A B C D 800 950 860 750 850 750 650 690 900 850 780 880 Data on life of bulbs 950 700 880 810 700 790 780 910 800 800 910 750 750 790 820 780 900 900 650 680 800 950 750 900 750 900 920 870 9. If the output voltage lies outside the range of 230 ± 20 V. B.7.Quality Loss Function 197 9.3 Television sets are made with a desired target voltage of 230 volts. the customer incurs an average cost of ` 200.2 A customer wants to compare the life of electric bulbs of four different brands (A.00 per set towards repair/service. The data on the life of bulbs has been obtained for the four brands which are given in Table 9. C and D) of same wattage and price. The output characteristic for life (Y0) = 1000 hours and the consumer loss A0 = ` 10. Find the cost coefficient K and establish the quality loss function. . Which brand would you recommend based on quality loss? TABLE 9. Genich Taguchi. 
He advocated that the cost of quality should be measured as function of deviation from the standard (target). His quality philosophy is that quality should be designed into the product and not inspected into it.1 INTRODUCTION We know that a full factorial design is one in which all possible combinations of the various factors at different levels are studied. And for conducting so many experiments a number of batches of materials. the number of experiments would be prohibitively large. His second philosophy is that quality can be achieved by minimizing the deviation from the target value. construction of fractional replicate designs generally requires good statistical knowledge on the part of the experimenter and is subject to some constraints that limit the applicability and ease of conducting experiments in practice. results in heterogeneity and the experimental results tend to become inaccurate (results in more experimental error). Hence care must be taken that variations in the experimental material. While factorial experimentation is very useful and is based on strong statistical foundations. 10.2 TAGUCHI METHODS Dr. do not bias the conclusions to be drawn. background conditions. it becomes unmanageable in the usual industrial context. quality is not achieved through inspection which is a postmortem activity. With increase in the number of factors and their levels. statisticians have developed Fractional Replicate Designs (Fractional Factorial Designs) which were discussed in Chapter 7. a Japanese scientist contributed significantly to the field of quality engineering. different process conditions. That is. To address these issues. etc. Taguchi Methods/Techniques are · · · · Off-line Quality Assurance techniques Ensures Quality of Design of Processes and Products Robust Design is the procedure used Makes use of Orthogonal Arrays for designing experiments 198 .C H A P T E R 10 Taguchi Methods 10. However. And that product design should be such that its performance is insensitive to uncontrollable (noise) factors. etc. 1 System Design System design is the first step in the design of any product.3 ROBUST DESIGN Robust design process consists of three steps. The main advantage of these designs lies in their simplicity. 10. Genich Taguchi suggested the use of Orthogonal Arrays (OAs) for designing the experiments.2. TABLE 10. exploration and presentation of ideas. In functional design. They provide the desired information with the least possible number of trials and yet yield reproducible results with adequate precision. System Design.1 Development of Orthogonal Designs Dr. Parameter Design and Tolerance Design. namely. These methods are usually employed to study main effects and applied in screening/pilot experiments. He has also developed the concept of linear graph which simplifies the design of OA experiments. 10. Functional design is highly a creative step in which the designers creative ideas and his experience play a vital role. The functional design involves the identification of various sub tasks and their integration to achieve the functional performance of the end product.1 Comparison of number of experiments in full factorial and OA designs Number of levels 2 2 2 3 3 Number of experiments Full factorial Taguchi 8 128 32.3. The conceptual design is the creation.323 4 8 16 9 27 Number of factors 3 7 15 4 13 Note that the number of experiments conducted using Taguchi methods is very minimal.Taguchi Methods 199 10. 
easy adaptability to more complex experiments involving number of factors with different numbers of levels. The designer should also keep in mind the weight and cost of the product versus the functional performance of the product. usually a prototype design (physical or mathematical) is developed by applying scientific and engineering knowledge. This involves both the conceptual and functional design of the product. It is more about how a product should look like and perform.768 81 1. At the same time the conceived conceptual design must satisfy both the manufacturer and the customer.594. The resource difference in terms number of experiments conducted between OA experiments and full factorial experiments is given in Table 10. . These designs can be applied by engineers/scientists without acquiring advanced statistical knowledge.1. 1) m = mean value of Y in the region of experiment ai and bj = individual or main effects of the influencing eij = error term. such as A. Tolerances should be set such that the performance of the product is on target and at the same time they are achievable at minimum manufacturing cost. the effect of each factor can be linear. Let a and b are the effects of the factors A and B respectively on the response variable Y. In Taguchi Methods we generally use the term optimal levels which in real sense they are not optimal. 10.3 Tolerance Design Tolerance design is the process of determining the tolerances around the nominal settings identified in parameter design process. The optimal tolerances should be developed in order to minimize the total costs of manufacturing and quality. Taguchi pointed out that in many practical situations these effects (main effects) can be represented by an additive cause-effect model. One practical approach recommended by Taguchi is to run a confirmation experiment with factors set at their optimal levels. If the result obtained (observed average response) from the confirmation experiment is comparable to the predicted response based on the main effects model. Suppose we have two factors (A and B) which influence a process. . Statistically designed experiments and/or Orthogonal experiments are used for this purpose. Taguchi suggested the application of quality loss function to arrive at the optimal tolerances. Interactions make the effects of the individual factors non-additive.3. of course. it is assumed that interaction effects are absent. in the region of experimentation. the model will become multiplicative (non-additive). Under this assumption. In the additive model.4 BASIS OF TAGUCHI METHODS The basis of Taguchi methods is the Additive Cause-Effect model.200 Applied Design of Experiments and Taguchi Methods 10. The experimenter may not know in advance whether the additivity of main effects holds well in a given investigation. 10.3. Sometimes one is able to convert the multiplicative (some other non-additive model) into an additive model by transforming the response Y into log (Y) or 1/Y or Y . Then this model can be used to predict the response for any combination of the levels of the factors. factors A and B The term main effect designates the effect on the response Y that can be attributed to a single process or design parameter. but the best levels. quadratic or higher order. m + a i + bj + eij (10. The additivity assumption also implies that the individual effects are separable. When interaction is included. we can conclude that the additive model holds good. 
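As a small illustration of the additive cause-effect model discussed in this section (the code and all numbers below are illustrative assumptions, not from the text), the predicted response at any combination of levels is the overall mean plus the separate level deviations, and a confirmation run is then compared against this prediction.

```python
# Sketch of an additive (main-effects) prediction for two factors A and B
# whose level averages have been estimated from an orthogonal experiment.
grand_mean = 50.0                                   # illustrative values only
level_avg = {"A1": 54.0, "A2": 46.0, "B1": 48.0, "B2": 52.0}

def predict(*levels, mean=grand_mean, avg=level_avg):
    """Y_hat = m + sum of (level average - m) over the chosen levels."""
    return mean + sum(avg[l] - mean for l in levels)

print(predict("A1", "B2"))   # 54 + 52 - 50 = 56.0

# If a confirmation experiment at A1 B2 gives an average close to this value,
# the additivity (no-interaction) assumption is considered adequate.
```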
In Taguchi Robust design procedure we confine to the main effects model (additive) only. The additive model has the form Yij = where.2 Parameter Design Parameter design is the process of investigation leading to the establishment of optimal settings of the design parameters so that the product/process perform on target and is not influenced by the noise factors. State the problem Determine the objective Determine the response and its measurement Identify factors influencing the performance characteristic Separate the factors into control and noise factors Determine the number of levels and their values for all factors Identify control factors that may interact Select the Orthogonal Array Assign factors and interactions to the columns of OA Conduct the experiment Analyse the data Interpret the results Select the optimal levels of the significant factors Predict the expected results Run a confirmation experiment Most of these steps are already discussed in Chapter 2.3 Explain the three steps of robust design process. 2.1 What is the need for Taguchi Methods? 10. 12. 11. 8. .2 What are the advantages of Taguchi Methods over the traditional experimental designs? 10. 10. 6. 3. 9. 4.4 State the additional experimental steps involved in robust design experimentation.Taguchi Methods 201 10. 7.5 STEPS IN EXPERIMENTATION The following are steps related to the Taguchi-based experiments: 1. Other steps related to the Taguchibased experiments are discussed in Chapters 11 and 12. 13. 5. 14. 10. PROBLEMS 10. 15. 2.1 Two-level series L4 (23) L8 (27) L16 (215) L32 (231) L12 (211)* Standard orthogonal arrays Four-level series L15 (45) L64 (421) Mixed-level series L18 (21. 312) Three-level series L9 (34) L27 (313) L81 (340) * Interactions cannot be studied † Can study one interaction between the 2-level factor and one 3-level factor An example of L8(27) OA is given in Table 11.1. 37)† L36 (211. Nomenclature of arrays L = Latin square a = number of rows b = number of levels c = number of columns (factors) Degrees of freedom associated with the OA = a – 1 Some of the standard orthogonal arrays are listed in Table 11.C H A P T E R 11 L a (b c ) Design of Experiments using Orthogonal Arrays 11. a French mathematician. The Hadamard matrices are identical mathematically to the Taguchi matrices. the columns and rows are rearranged (Ross 2005). 202 . TABLE 11.1 INTRODUCTION Orthogonal Arrays (OAs) were mathematical invention recorded in early 1897 by Jacques Hadamard. (2. 4(B) 5 6(AB) 7 5 4 7 6 1 6 7 4 5 2 3 7 6 5 4 3 2 1 .2 ASSIGNMENT OF FACTORS AND INTERACTIONS When the problem is to study only the main factors.3. Any pair of columns have only four combinations (1. Interaction Tables (Appendix B) 2. 2). 2 1 2(A) 3 4 5 6 3 3 2 1 Interaction table for L8 OA Column no. Taguchi has provided the following two tools to facilitate the assignment of factors and interactions to the columns of Orthogonal Arrays. the factors can be assigned in any order to each column of the OA. Table 11. (1. 11. we have to follow certain procedure. Each column has equal number of 1s and 2s. When we have main factors and some interactions to be studied. and (2. 1 2 3 4 5 6 7 8 · · · · Standard L8 orthogonal array 2 1 1 2 2 1 1 2 2 3 1 1 2 2 2 2 1 1 Columns 4 5 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 6 1 2 2 1 1 2 2 1 7 1 2 2 1 2 1 1 2 1 1 1 1 1 2 2 2 2 The 1s and 2s in the matrix indicate the low and high level of a factor respectively.3 Column no. 2) indicating that the pair of columns are orthogonal. 1). 1). 1. 
One interaction table for the two-level series and another one for three-level series are given in Appendix B. There are eight experimental runs (trials) 1 to 8.3 gives part of the two-level interaction table that can be used with the L8 OA. Linear Graphs (Appendix C) The interaction table contains all possible interactions between columns (factors). Suppose factor A is assigned to Column 2 and factor B is assigned to Column 4. · This OA can be used to study upto seven factors. the interaction AB should be assigned to Column 6 as given in Table 11. TABLE 11.2 Trial no.Design of Experiments using Orthogonal Arrays 203 TABLE 11. the numbers indicate the column numbers of L8 OA. The rows will indicate the number of experiments (trials) to be conducted. we include only main effects and do not consider interaction effects. Then the interactions AB.1 Standard linear graph of L8 OA. These linear graphs facilitate the assignment of main factors and interactions to the different columns of an OA. Step 5: Compare with the standard linear graph of the chosen OA. B.1 Consider an experiment with four factors (A. A 1 5 4 6 C D 7 3 2 B FIGURE 11. 4 and 7 respectively. ILLUSTRATION 11.3 SELECTION AND APPLICATION OF ORTHOGONAL ARRAYS The following steps can be followed for designing an OA experiment (Matrix Experiment): Step 1: Determine the df required for the problem under study. C and D to columns 1. To simplify the assignment of these interactions to the columns of the OA. 11.1 Linear Graph Taguchi suggested the use of additive effects model in robust design experiments. 2. Step 6: Superimpose the required LG on the standard LG to find the location of factor columns and interaction columns. and BC should be assigned to columns 3.3. The design of this OA experiment is explained following the procedure given in Section 11. Figure 11. (b) Possible number of interactions of OA > the number of interactions to be studied. . 5 and 6 respectively. AC. These are prepared using part of the information from the interaction table. Step 4: Draw the required linear graph for the problem.204 Applied Design of Experiments and Taguchi Methods 11.1 shows one of the two standard linear graphs associated with L8 OA. The Linear Graph (LG) is a graphic representation of interaction information.1. Suppose we assign main factors A. The remaining columns (if any) are left out as vacant.2. Step 7: Draw the layout indicating the assignment of factors and interactions. Also. In Figure 11. And we conduct a confirmation experiment to determine whether the main factors model is adequate. B. the interactions AB. Sometimes a few two factor interactions may be important. (a) degrees of freedom of OA > df required for the experiment Note that the degrees of freedom of OA = number of rows in the OA minus one. In the additive model. Step 2: Note the levels of each factor and decide the type of OA. AD and BD are of interest to the experimenter. (two-level or three-level) Step 3: Select the particular OA which satisfies the following conditions. Taguchi suggested their inclusion in the orthogonal experiments. C and D) each at two levels. Taguchi developed linear graphs. 3(b).2 shows the required linear graph for Illustration 11.1.4). the best OA would be L8. (b) Number of interactions to be studied = 3 Interactions possible in L8 = 3 [Figure 11.2 Required linear graph for Illustration 11. B A D FIGURE 11. . TABLE 11.1.3(a) and 11. 1 3 3 5 1 4 6 7 (b) 5 2 6 (a) FIGURE 11. Hence.3 Types of standard linear graph of L8 OA. 
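The balance property of an orthogonal array is easy to verify mechanically. The sketch below (illustrative code, not part of the text) encodes the standard L8 array of Table 11.2 and checks that every column contains equal numbers of 1s and 2s and that every pair of columns contains each of the combinations (1,1), (1,2), (2,1) and (2,2) exactly twice.

```python
# Sketch: the standard L8(2^7) orthogonal array and a check of its balance.
from itertools import combinations, product
import numpy as np

L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

# Each column holds four 1s and four 2s
assert all((L8[:, j] == 1).sum() == 4 for j in range(7))

# Each pair of columns holds every level combination exactly twice
for j, k in combinations(range(7), 2):
    pairs = [tuple(map(int, p)) for p in zip(L8[:, j], L8[:, k])]
    assert all(pairs.count(c) == 2 for c in product((1, 2), repeat=2))

print("every column is balanced and every pair of columns is orthogonal")
```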
Step 5: The two types of standard linear graphs of L8 OA are shown in Figures 11.Design of Experiments using Orthogonal Arrays 205 Step 1: The required degree of freedom (Table 11. Step 6: Superimpose the required linear graph on the standard linear graph.3(a)] Therefore.4 Factors A B C D AB AD BD Degrees of freedom for Illustration 11. Step 4: Required linear graph Figure 11.1 Levels 2 2 2 2 Degrees of freedom 2 – 1 2 – 1 2 – 1 2 – 1 (2 – 1) (2 – 1) (2 – 1) (2 – 1) (2 – 1) (2 – 1) = = = = = = = 1 1 1 1 1 1 1 Total df = 7 Step 2: Levels of factors All factors are to be studied at two-levels. choose a two-level OA Step 3: Selection of required OA (a) The OA which satisfies the required degrees of freedom is L8 OA. the levels are set only to the main factors and hence interacting columns can be omitted in the test sheet. 1 2 3 4 5 6 7 8 B 1 1 1 1 1 2 2 2 2 A 2 1 1 2 2 1 1 2 2 AB 3 1 1 2 2 2 2 1 1 The design layout for Illustration 11. During experimentation. .4) is shown in the design layout (Table 11.1 is shown in Figure 11.1.6).1 Factors D 4 1 2 1 2 1 2 1 2 BD 5 1 2 1 2 2 1 2 1 AD 6 1 2 2 1 1 2 2 1 C 7 1 2 2 1 2 1 1 2 * * * * * * * * * * * * * * * * Response Y Test sheet For conducting the experiment test sheet may be prepared without the interacting columns (Table 11.5 Trial no. TABLE 11. the design is called a saturated design. The superimposed linear graph for Illustration 11.3(a) is similar to the required linear graph. Step 7: The assignment of the factors and interactions to the columns of OA as per the linear graph (Figure 11. Suppose we have only five factors/ interactions to be studied.4 Superimposed linear graph for Illustration 11.5). we can use L8 OA. However. Since all the columns of OA is assigned with the factors and interactions. two of the seven columns remain unassigned. For this also we can use L8 OA.206 Applied Design of Experiments and Taguchi Methods The standard linear graph shown in Figure 11. However. L4 can be used to study up to three factors.4. 1B AB 3 5 BD 7 2A 6 AD 4D C FIGURE 11. interacting columns are required for data analysis. If we want to study more than three factors but up to seven factors. These vacant columns are used to estimate error. 2 Levels 2 J100 SB35 12 mm 75° 150 A Yes Weaving Grinding In addition to the main effects. Test sheet for Illustration 11. and GH.Design of Experiments using Orthogonal Arrays 207 TABLE 11. AC.2 1 1 1 1 2 2 2 2 C 7 1 2 2 1 2 1 1 2 An arc welding process is to be studied to determine the best levels for the process parameters in order to maximize the mechanical strength of the welded joint. Design of the experiment: All factors are to be studied at two-levels. a two-level series OA shall be selected. AG.7. The design of OA experiment for studying this problem is explained below. TABLE 11. the experimenter wanted to study the two-factor interactions.1 experiment Factors A D 2 4 1 1 2 2 1 1 2 2 1 2 1 2 1 2 1 2 Response (Y) R1 R2 * * * * * * * * * * * * * * * * B 1 1 2 3 4 5 6 7 8 R–Replication ILLUSTRATION 11.6 Trial no. Degrees of freedom required for the problem are Main factors = 8 (each factor has one degree of freedom) Interactions = 4 Total df = 12 . AH. So. The factors and their levels for study have been identified and are given in Table 11.7 Factors 1 Type of welding rod (A) Weld material (B) Thickness of material (C) Angle of welded part (D) Current (E) Preheating (F) Welding method (G) Cleaning method (H) B10 SS41 8 mm 70° 130 A No Single Wire brush Factors and their levels for Illustration 11. 8. 
the triangular table is used. we have to use L16(215) OA. From the standard linear graph (Figure 11.7).7 Superimposed linear graph for Illustration 11.2.2.208 Applied Design of Experiments and Taguchi Methods To accommodate 12 df. The required linear graph is closer to part of the standard linear graph as shown in Figure 11. 15 9 11 11 10 8 14 A C 12 13 3 5 G H 2 6 4 FIGURE 11. we use the triangular table (Appendix B) to complete the assignment. FIGURE 11. along with the partly matched linear graph. The three unassigned columns marked with e are used to estimate the error. the part which is similar to the required linear graph is selected and superimposed (Figure 11.2 is shown in Figure 11. The required linear graph for Illustration 11. A1 10C 11 AG3 G2 5AH 4H 6GH FIGURE 11.6 Standard linear graph of L16 OA. Sometimes. Additional information on Orthogonal Arrays and linear graphs is available in Phadke (2008). The assignment of factors and interactions to the columns of L16(215) OA is given in Table 11. the required linear graph may not match with the standard linear graph or may partly match. . In such cases.5.6).6. Standard linear graphs are not given for all the Orthogonal Arrays.5 Required linear graph for Illustration 11. Linear graphs facilitate easy and quick assignment of factors and interactions to the columns of Orthogonal Arrays. When standard linear graph is not available. The objective is to maximize the hardness. BD. type of welding (D). D. Design an OA experiment. Design an OA experiment. 11. BC. AD. 11. C. he wants to study interactions AB. 11.2 An industrial engineer wants to study a manual welding process with factors each at two levels. C. Design an OA experiment.3 An experimenter wants to study the effect of five main factors A. CE and DE.1 An engineer wants to study the effect of the control factors A. affecting the hardness of a metal component. including the interactions AB. In addition.6 A heat treatment process was suspected to be the cause for higher rejection of gear wheels. AD. BD. D and E. BC. The factors are welding current (A).8 Trial no. B. B.9. welding time (B).2 4 B 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 5 D 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 6 E 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 7 F 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2 8 H 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 9 AH 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 10 GH 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 11 e 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 12 C 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 13 AC 1 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2 14 e 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 15 e 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 PROBLEMS 11. . The factors considered for the experiment and their levels are given in Table 11. and BE. D and E including the interactions AB and AC affecting the hardness of a metal component. and E each at two-level and two-factor interactions AC. Design an OA experiment. C.4 An experimenter wants to study the effect of five main factors A. D and E each at two-level and two-factor interactions AB. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 1 A 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 G 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 3 AG 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 Assignment of factors and interactions for Illustration 11. B. AE. BC. 11.5 An engineer performed an experiment on the control factors A. CD and AC. B. The objective is to maximize the hardness. C. 11. BD.Design of Experiments using Orthogonal Arrays 209 TABLE 11. Hence. and operator (E). it was decided to study the process by Taguchi method. Design an OA experiment. ED and CE. AE. thickness of plates (C). BE. BE. 
450°C 220°C Hardening temperature (A) First quenching time (B) Second quenching time (C) Quenching media (D) Quenching method (E) Preheating (F) Tempering temperature (G) 830°C 50 s 15 s Fresh water Top up No 190°C In addition. AF.6 Levels 1 2 860°C 60 s 20 s Salt water Top down Yes. BD. . CD and CE are required to be studied.210 Applied Design of Experiments and Taguchi Methods TABLE 11. AC. Design a Taguchi experiment for conducting this study. the interaction effects AB.9 Factors Factors and their levels for Problem 11. If error sum of squares is large compared to the control factors in the experiment. Analysis of variance (ANOVA) has already been discussed in the earlier chapters. ILLUSTRATION 12. this method may be sufficient. and G) each at two levels using L8 OA. B. The data can be either attribute or variable. In this chapter how various types of data are analysed is discussed. This method requires no statistical knowledge. The response obtained from all the trials of an experiment is termed as data. E. The response graph method is very easy to understand and apply. The analysis of data from these types of experiments is discussed in the following illustration. F. D.2 VARIABLE DATA WITH MAIN FACTORS ONLY The design of OA experiments has been discussed in Chapter 11. The data analysis from this experiment is discussed as follows: Data analysis using response graph method The following steps discuss data analysis using response graph method: Step 1: Develop response totals table for factor effects (Table 12. ANOVA together with percent contribution indicate that the selection of optimum condition may not be useful. Suppose in an OA experiment we have only the main factors and the response is a variable. The results (response) from two replications are given in Table 12. We know that variable data is obtained by measuring the response with an appropriate measurement system. Also for statistically validating the results. ANOVA is required. 211 .1.2).1 INTRODUCTION Data collected from Taguchi/Orthogonal Array (OA) experiments can be analysed using response graph method or Analysis of Variance (ANOVA). For practical/industrial applications. By conducting the experiment we obtain the response (output) from each experiment. C. 12. This method accounts the variation from all sources including error term.C H A P T E R 12 Data Analysis from Taguchi Experiments 12.1 An experiment was conducted to study the effect of seven main factors (A. 13 0.212 Applied Design of Experiments and Taguchi Methods TABLE 12.88 2 F 8.00 5. if any are arbitrarily broken. Step 2: Construct average response table and rank the level difference of each factor.75 5. 4.63 1.13 4.1 B 50 49 C 53 46 D 54 45 E 61 38 F 65 34 G 46 53 This table is developed by adding the response values corresponding to each level (Level 1 and Level 2) of each factor. Level 1 totals of factor A is sum of the observations from trials (runs) 1.1 F 6 1 2 2 1 1 2 2 1 G 7 1 2 2 1 2 1 1 2 Results R1 R2 11 4 4 4 9 4 1 10 11 4 10 8 4 3 4 8 Factors/Columns C D E 3 4 5 1 1 2 2 2 2 1 1 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 TABLE 12. Each total (Table 12. Ties.12 4 E 7.75 6.3 Factors Level 1 Level 2 Difference Rank Average response and ranking of factor effects A 7.25 6. the next highest difference as rank 2 and so on.63 0.3. That is.38 1.75 0. These differences are ranked starting with the highest difference as rank 1. A1 = (11 + 11) + (4 + 4) + (4 + 10) + (4 + 8) = 56 Similarly. 3 and 4 (Table 12. This difference represents the effect of the factor. 
the Level 2 totals of factor B is the sum of response values from trials 3.1). 2.12 7 C 6.62 3 B 6.25 3.75 2.1 Trial no. 7 and 8.2 Factors Level 1 Level 2 A 56 43 Response totals for Illustration 12. TABLE 12. The response totals are converted into average response and given in Table 12. That is.88 5 D 6. The absolute difference in the average response of the two levels of each factor is also recorded.63 4. B2 = (4 + 10) + (4 + 8) + (1 + 4) + (10 + 8) = 49 Thus there are 8 observations in each total.88 1 G 5. For example. A 1 1 2 3 4 5 6 7 8 1 1 1 1 2 2 2 2 B 2 1 1 2 2 1 1 2 2 Data for Illustration 12.2) is divided by 8 to calculate average.88 6 .63 5. Step 4: Predict the optimum condition.1.Data Analysis from Taguchi Experiments 213 From Table 12.1 Response graph for Illustration 12. 8 Average response 7 6 5 4 A1 A2 B1 B2 C1 C2 D1 D2 E1 E2 F1 F2 G1 G2 Factor levels FIGURE 12. These are selected in the order of their ranking starting from rank 1. Thus. The predicted optimum response mpred is given by N pred = Y + (F1  Y ) + (E1  Y ) + (A1  Y ) = (F1 + E1 + A1 )  2Y (12. The response graph with the average values is shown in Figure 12. For maximization of response. And the grand average/overall mean.1.19 Step 3: Draw the response graph. A1 = average response at Level 1 of these factors. rank 2 and rank 3) as significant.1.1) where. the optimum condition is selected based on higher mean value of each factor. In this example. E1 . it is suggested to take the number of significant effects (factors) equal to about one-half the number of degrees of freedom of the OA used for the experiment. Accordingly from Figure 12. Y = overall mean response and F1 . that is whether minimization of the response or maximization. Suppose in this experiment the objective is maximization of response. the optimum condition is selected. the optimum condition is F1 E1 A1. Y = Grand total of all observations Total number of observations = 99 16 = 6. Based on the objective of the experiment.3. As a rule of thumb. it is observed that factor F has the largest effect (rank 1). we consider 3 effects (rank 1. all factors will not contribute significantly. the optimum condition is given by A1 B 1 C 1 D1 E1 F1 G 2 Generally. . 13 – 6.3) where.19) = (8.56 = 3. SS A = 2 A1 nA1 + 2 A2 nA2  CF (12.38 = 10.19 + (8.38 Data analysis using analysis of variance First the correction factor (CF) is computed.1).13 + 7. SSTotal =  Yi i 1 N 2  CF (12.214 Applied Design of Experiments and Taguchi Methods mpred = 6.44 The factor (effect) sum of squares is computed using the level totals (Table 12.19) + (7.07  8 8    .56 SSTotal is computed using the individual observations (response) data as already discussed.63 + 7.56 = 10.76 – 12.00) – 2(6.56 = 160.19) + (7. the sum of squares of all effects are computed  (50)2 (49)2  SSB =  +   612.2) = (11)2 + (11)2 + (4)2 + … + (10)2 + (8)2 – 612.07  8 8     (53)2 (46)2  SSC =  +   612.00 – 6. CF = where.56 = 0. A1 = nA1 = A2 = nA2 = level 1 total of factor A number of observations used in A1 level 2 total of factor A number of observations used in A2  (56)2 (43)2  SS A =  +   612.19) = 22. T = grand total and N = total number of observations CF = (99)2 16 T2 N = 612.63 – 6.57  8 8    Similarly. 44 ANOVA (initial) for Illustration 12.07 + 3. SSTotal = SScolumns Thus.62 37.Data Analysis from Taguchi Experiments 215  (54)2 (45)2  SSD =  +   612.86 0. if we have some unassigned columns.07 33. SSe = SSTotal – (SSA + SSB + SSC + SSD + SSE + SSF + SSG ) = 160.89 5. 
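The response-table arithmetic is easy to script. The sketch below (illustrative code, not part of the text) computes the level totals, level averages and level differences for the data of Table 12.1, reproducing the totals of Table 12.2 and the ranking basis of Table 12.3.

```python
# Sketch: response totals and average response per factor level for the
# L8 experiment of Illustration 12.1 (two replications per trial).
import numpy as np

# Columns 1..7 of the standard L8 array, assigned to A, B, C, D, E, F, G
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])
R1 = np.array([11, 4, 4, 4, 9, 4, 1, 10])
R2 = np.array([11, 4, 10, 8, 4, 3, 4, 8])
trial_totals = R1 + R2

for j, name in enumerate("ABCDEFG"):
    t1 = trial_totals[L8[:, j] == 1].sum()   # level-1 total (8 observations)
    t2 = trial_totals[L8[:, j] == 2].sum()   # level-2 total (8 observations)
    print(name, t1, t2, round(t1 / 8, 2), round(t2 / 8, 2),
          round(abs(t1 - t2) / 8, 2))
# A: 56 and 43 (7.00 vs 5.38); F shows the largest difference, hence rank 1
```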
That is.1 Mean square 10.01 0.07) = 45.54 0.44 – (10.07 3.57 0.56 = 60.56 = 5.57 + 0.68 F0 1. computing the sum of squares of that particular column.33 C(%) 6.4 Source of variation A B C D E F G Error (pure) Total Sum of squares 10.07 45.07  8 8     (61)2 (38)2  SSE =  +   612.4) Degrees of freedom 1 1 1 1 1 1 1 8 15 .07 5.07  8 8     (46)2 (53)2  SSG =  +   612.07 3.07  8 8    The error sum of squares is calculated by subtracting the sum of all factor sums of squares from the total sum of squares. It is to be noted that when we compute the sum of squares of any assigned factor/interaction column (for example.91 Rank 3 7 5 4 2 1 6 (12.07 + 3. SSA = SS of Column 1.07 3.07 60. is called experimental error or pure error.57 0.56 = 3.07 60.07 5. SSA). The initial ANOVA with all the effects is given in Table 12.59 0. the total sum of squares is equal to the sum of squares of all columns.45 160.56 = 33.07 + 5. in fact.07 33.4.07 3.07 + 33.45 This error sum of squares is due to replication of the experiment.04 1. SSTotal = SSassigned columns + SSunassigned columns TABLE 12.91 3.58 0.5) (12. Also. we are.07  8  8    (65)2 (34)2  SSF =  +   612.44 1.07 + 60.82 10.07 5.16 20.54 28. these two are pooled together to test the next larger factor effect until some significant F is obtained. TABLE 12.05. To increase the degrees of freedom for error sum of squares.1 Degrees of freedom 1 1 1 12 15 Mean squares 10. If no significant F-exists. Discussion: The F-value from table at 5% significance level is F0.35 100. Pooling of sum of squares: There are two approaches suggested.59 20.1.07 60.e. So.99 12.57 33. pooling up to onehalf of the degrees of freedom has been suggested. This type of result may not be useful in engineering sense. most of the factors show significance.8 = 5. The final ANOVA is given in Table 12.44 ANOVA (final) for Illustration 12. Generally..44 35. If that factor is significant. The corresponding degrees of freedom are also pooled.23 6. the next largest factor is removed from the pool and the F-test is done on those two factors with the remaining pooled variance. When error sum of squares are high and/or error degrees of freedom are less.216 Applied Design of Experiments and Taguchi Methods When experiment is not replicated.07 60. Pooling down: In the pooling down approach.1). It has been recommended to use pooling up strategy. This is similar to the selection of the effects (based on rank) for determining the optimum condition in the response graph method.73 160.5 after pooling.4) we see that only factors E and F are significant.57 33. we will have effects equal to one-half of the degrees of freedom of the OA used in the experiment. In the case of saturated design this is equivalent to have one-half of the effects in the ANOVA after pooling.70 C(%) 6. Even when all columns are assigned and experiment not replicated. i. the pooled SSe should not be more than 50% of SSTOTAL for half the degrees of freedom of OA used in the experiment. These two errors can be combined together to get more degrees of freedom and F-tested the effects. we will have both experimental error (due to replication) and error from unassigned columns. pooling down and pooling up. for pooling the sum of squares of factors/interactions into the error term. we pool some of the factor/interaction sum of squares so that in the final ANOVA.00 . the smallest factor variance is F-tested using the next larger factor variance.32. 
we test the largest factor variance with the pooled variance of all the remaining factors.07 56. Under pooling up strategy. we can start pooling with the lowest factor/interaction variance and the next lowest and so on until the effects are equal to one-half of the degrees of freedom used in the experiment (3 or 4 in this Illustration 12. This is repeated until some insignificant F value is obtained. As a rule of thumb. Pooling up: In the pooling up strategy.07 4.62 37. we can obtain error variance by pooling the sum of squares of small factor/interaction variances. the sum of squares of unassigned columns is treated as error sum of squares. When all columns are not assigned and experiment is replicated. from ANOVA (Table 12.5 Source of variation A E F Error (pooled) Total Sum of squares 10.73 F0 2. factors E and F are significant. . 2. The optimal levels for the significant factors are selected based on the mean (average) response. Advantages of ANOVA The following are the advantages of ANOVA: 1. we get µpred = (7.75. the optimal condition is A1 E1 F1.13) – 2(6.05. Predicting the optimum response: The optimum condition is A1 E1 F1.12 = 4. For maximization of response. The confidence interval for the predicted optimum process average can be constructed. By controlling the factors with high contribution. (12. the total variation can be reduced leading to improvement of process/product performance. ANOVA together with the percent contribution will suggest that there is little to be gained by selecting optimum conditions. As a rule of thumb. 4. Note that factors E and F together contribute about 52% of total variation. For insignificant factors. levels are selected based on economic criteria even though any level can be used. Thus. it is found that the rank order based on contribution is same as the rank order obtained by the response graph method. If SSe is large compared to the controlled factors in the experiment. This information is not available from the response table. If variable data is collected from an experiment involving main factors and interactions. The per cent contribution due to error before pooling (Table 12. 3. Sum of squares of each factor is accounted. the procedure to be adopted for analyzing the data is explained in the following illustration.00 + 7.63 + 8.Data Analysis from Taguchi Experiments 217 As F0. we can conclude that for industrial application any one of these two methods can be employed.19) = 10. This result is same as that of response graph method. If it is higher (50% or more). From Table 12.4. we can say that some factors have not been considered during experimentation conditions were not well controlled or there was a large measurement error.3. if the per cent contribution due to error is about 15% or less.4) indicates the accuracy of the experiment.6) Npred = Y + (A1  Y ) + ( E1  Y ) + (F1  Y ) = A1 + E1 + F1  2Y Substituting the average response values from Table 12. The results can be statistically validated. we can assume that no important factors have been excluded from experimentation. 12.1.3 VARIABLE DATA WITH INTERACTIONS The design of an OA experiment when we have both main factors and interactions has already been discussed in Chapter 11. High error contribution also indicates that some factors have not been included in the study.38 Per cent contribution (C): The per cent contribution indicates the contribution of each factor/ interaction to the total variation. 
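The pooling-up bookkeeping can be automated. The sketch below (ours) pools the smallest factor sums of squares of Illustration 12.1 into the error term until only about one-half of the degrees of freedom remain as effects, then recomputes the F ratios against the pooled error; scipy is used only to obtain the reference probability, and the sums of squares are the rounded values computed earlier.

```python
from scipy.stats import f as f_dist

# Factor sums of squares (1 d.o.f. each) and the replication (pure) error
ss = {"A": 10.56, "B": 0.06, "C": 3.06, "D": 5.06, "E": 33.06, "F": 60.06, "G": 3.06}
ss_pure_error, df_pure_error = 45.50, 8

keep = 3                                            # rule of thumb: keep about half of the 7 d.o.f.
pooled = sorted(ss, key=ss.get)[: len(ss) - keep]   # smallest effects first (B, C, G, D)
ss_err = ss_pure_error + sum(ss[f] for f in pooled)
df_err = df_pure_error + len(pooled)
ms_err = ss_err / df_err

for factor in sorted(ss, key=ss.get, reverse=True)[:keep]:   # F, E, A are retained
    F0 = ss[factor] / ms_err                        # each retained factor has 1 d.o.f.
    p = 1 - f_dist.cdf(F0, 1, df_err)
    print(f"{factor}: F0 = {F0:.2f}   p = {p:.3f}")
```

With 12 pooled error degrees of freedom the reference value F(0.05; 1, 12) is about 4.75, so E and F come out significant while A does not, in line with the final ANOVA above.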
25 5 Factors Level 1 Level 2 Difference Rank The response graph is shown in Figure 12. TABLE 12.42 6.08 2 AD 6.42 5.25 1 C 5.2 A 6. we have B.92 3 CD 6. Note that there are 12 observations in each level total. Since the interaction effect is significant.08 7 D 5.58 4 B 7. Optimal levels: We select the significant effects equal to one-half of degrees of freedom of the OA used in the experiment. we select 3 or 4 as significant effects. Accordingly.8 Average response and ranking of factor effects for Illustration 12.50 0.25 4.2 B 85 46 C 65 66 D 69 62 E 53 78 AD 77 54 CD 73 58 The grand average ( Y ) = 131/24 = 5.6) is given in Table 12.58 6 E 4. TABLE 12.46. E.42 4.50 2.83 3.75 5. to find the optimum condition the interaction should be broken . 3 and 4) as significant. D 1 1 2 3 4 5 6 7 8 1 1 1 1 2 2 2 2 C 2 1 1 2 2 1 1 2 2 Experimental data with interactions (Illustration 12.6 Trial no. 2.67 1.08 3.2. Table 12.2 Suppose we have the data from L8 OA experiment with interactions as given in Table 12.7 Factors Level 1 Level 2 A 75 56 Response totals for Illustration 12. Considering 4 effects as significant.7.17 0. AD and A (rank 1.83 1.08 4.6.8 gives the average response and the rank order of the absolute difference in average response between the two levels of each factor.50 1.218 Applied Design of Experiments and Taguchi Methods ILLUSTRATION 12. TABLE 12.2) Factors/Columns CD A AD 3 4 5 1 1 2 2 2 2 1 1 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 Response R2 4 4 1 0 8 1 4 4 B 6 1 2 2 1 1 2 2 1 E 7 1 2 2 1 2 1 1 2 R1 11 4 4 4 9 4 1 14 R3 11 4 14 8 4 1 4 8 Data analysis using response graph method The response totals computed from L8 OA (Table 12. 9 AD Interaction breakdown response totals D1 A1 A2 45 30 D2 24 32 The predicted optimum response (mpred) is given by N pred = Y + (A1  Y ) + (B1  Y ) + (E2  Y ) + [( A1 D1  Y )  (A1  Y )  (D1  Y )] (12.50 + 7. B1.1).1). the optimum condition is B 1. A2 D1. A 1 D1 . A1 And the optimal levels for the factors is A1. and (2.04 24 (75)2 (56)2 + – CF = 15.04 12 12 SSA = .9. for maximization.78 – 5.87 ( A1 D1 = 45/6) Data analysis using analysis of variance The sum of squares is computed using the response totals in Table 12. Note that each total is a sum of 6 observations. Hence.7) = B1 + E2 + A1 D1  D1  Y = 7.08 + 6. D1 and E2 TABLE 12. A1 D2. and A2 D2. corresponding to the pairs of combinations that occur in a two level OA: (1.2).7. E 2. These totals are computed from the response data in L8 OA (Table 12.6). For maximization of response the optimum interaction component is A1 D1 (Table 12. The breakdown totals for the interaction AD is given in Table 12. Correction factor (CF) = (131)2 = 715. (1.2 Response graph for Illustration 12.2.Data Analysis from Taguchi Experiments 219 Average response FIGURE 12. down into A1 D1.9).2) respectively. (2.46 = 9.50 – 5. 00 0. factor B shows significance. the sum of squares of all main effects/factors is obtained.38 234.78 1.04 26.33 0.220 Applied Design of Experiments and Taguchi Methods Similarly.38. the interaction sum of squares is also computed using the level totals of the interaction column.04.49.03 4. we can pool SSC.04 63.04 63.05.11) equal to one-half of the number of degrees of freedom of the experiment.91%).04 9. However. SSB = 63. the rank order based on contribution is same as that obtained earlier (response graph method).52 62. we can also study a few important interactions along with the main factors in these designs. SSD = 2. 
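For reference, the additive prediction of the optimum mean used for Illustration 12.1 (maximization with F1, E1 and A1) is a one-line calculation; the sketch below (ours) uses the level means already computed from the level totals.

```python
# Predicted optimum mean: grand mean plus the gain of each significant
# factor at its chosen level (Illustration 12.1, maximization case).
grand_mean = 99 / 16                                    # 6.19
best_level_means = {"F1": 65 / 8, "E1": 61 / 8, "A1": 56 / 8}   # 8.13, 7.63, 7.00

mu_pred = grand_mean + sum(m - grand_mean for m in best_level_means.values())
# Equivalent closed form: F1 + E1 + A1 - 2 * grand mean
print(f"predicted optimum response = {mu_pred:.2f}")    # about 10.38
```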
it is equivalent to the sum of squares of that column to which that effect is assigned.04 17.04.91 100.04 and SSE = 26. when we determine the sum of squares of any effect.00 5.97 ANOVA (initial) for Illustration 12.04 2. none of the effects are significant.04 22. SSTotal = (11)2 + (4)2 + (11)2 + (4)2 + … + (4)2 + (8)2 – CF = 371.1. .04 26.00 Rank 4 1 7 6 2 3 5 Degrees of freedom 1 1 1 1 1 1 1 16 23 Inference: Since F0. At this stage. At 5% level of significance.38 0.10. SSTotal is computed using the individual responses as usual.04 0. Therefore. When we are dealing with variable data. However.55 7.38 0. which will be accepted by all.01 371.16 = 4.97 Actually in all OA experiments. SSC = 0. The error variation is very large (contribution = 62.2 Mean square 15.38 12 12 These computations are summarized in Table 12. ANOVA is used when we want to validate the results statistically. As pointed out already.01 0. SSD and SSCD into the error term leaving four effects in the final ANOVA (Table 12.93 2. TABLE 12.14 1. The optimal levels can be obtained as in the response graph method. SSAD = SSCD = (77)2 (54)2 + – CF = 22.38 14. with pooled error variance. Using the pooling rule. suggesting that some factors would not have been included in the study.04 2.10 Source of variation A B C D E AD CD Error (pure) Total Sum of squares 15.04 9. especially in the early stage of experimentation (screening/pilot experiments). we can use either response graph/ranking method or ANOVA.64 C(%) 4.51 0. We prefer OA experiments when we are dealing with more number of factors.63 F0 1.04 22.04 12 12 (73)2 (58)2 + – CF = 9. usually only main factors are studied. 04 63.47 371.60 1 0. denoting error.38 26.46 2 0.12.56 2 0.90 2. With the result. .Data Analysis from Taguchi Experiments 221 TABLE 12. TABLE 12.52 1 0.04 22. ILLUSTRATION 12.04 12. The following illustration explains how variable data from an experiment with a single replicate and vacant column(s) is analysed.28 Total = 7. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 1 D 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 BE 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 3 F 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 4 G 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 5 AF 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 6 A 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 7 CE 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2 Data for Illustration 12.97 12.04 22.38 2 0.02 1.3 An experiment was conducted using L16 OA and data was obtained which is given in Table 12.4 VARIABLE DATA WITH A SINGLE REPLICATE AND VACANT COLUMN We use OA designs to have less number of experimental trials leading to savings in resources and time.60 1 0.16 4.92 F0 1.56 1 0.38 Note that the vacant Column 13 is assigned with e.60 1 0.04 63.71 Sum of squares 15.42 2 0.12 Trial no.22 1 0. there may be one or more columns left vacant in the OA.30 2 0. Some times there may be a constraint on resources prohibiting the replication of the experiment.58 2 0.3 8 E 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 9 AC 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 10 B 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 11 BD 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 12 AB 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 13 e 1 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2 14 CD 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 15 C Response 1 0.50 2 0.04 245. Also there may be cases where we may not be able to assign with factors and/or interactions to all columns of the OA.40 1 0.11 Source of variation A B E AD Error (pooled) Total ANOVA (final) for Illustration 12.38 26.40 2 0.2 Degrees of freedom 1 1 1 1 19 23 Mean square 15. 03063.15 are significant.76 3.60 3. 
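When a two-factor interaction such as AD is significant, the optimum levels of the two factors are read from the four interaction cells rather than from the individual level averages. A small sketch (ours, using the Illustration 12.2 trial totals and its column assignment) rebuilds the A by D table and the interaction-adjusted prediction of Eq. (12.7).

```python
import numpy as np

# Column assignment of Illustration 12.2 (L8): D -> col 1, A -> col 4
D_col = np.array([1, 1, 1, 1, 2, 2, 2, 2])
A_col = np.array([1, 2, 1, 2, 1, 2, 1, 2])
# Trial totals over the three replicates of Table 12.6
trial_totals = np.array([26, 12, 19, 12, 21, 6, 9, 26])

# Break the significant A x D interaction down into its four cells
for a in (1, 2):
    for d in (1, 2):
        cell = trial_totals[(A_col == a) & (D_col == d)].sum()
        print(f"A{a}D{d}: total = {cell}   mean = {cell / 6:.2f}")   # 6 readings per cell

# Interaction-adjusted prediction: B1 + E2 + A1D1 - D1 - grand mean
grand_mean = 131 / 24
mu_pred = 85/12 + 78/12 + 45/6 - 69/12 - grand_mean
print(f"predicted optimum = {mu_pred:.2f}")             # about 9.87
```

For maximization the best cell is A1D1 (mean 7.50), which is why the levels of A and D are fixed from the interaction table even though neither main effect is large on its own.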
.14)2 (7.4040 SSA = 2 A1 A2 + 2  CF nA1 nA2 (12.00203.58 3.8 3. This is given in the final ANOVA (Table 12.96 3.78 3. BD.88 3. SSCD = 0. From the vacant column (column 13).00023. TABLE 12.36 3.8) = (3. For each significant interaction effect.4547 – 3.15). SSF = 0. SSCE = 0.42 4.00122.222 Applied Design of Experiments and Taguchi Methods Data analysis: Note that we have only one replication and there is one vacant column in Table 12.24 3.02 3. SSAB = 0. Now. we compute error sum of squares.44 3. all the effects in Table 12. SSC = 0.14.14 3.66 3. the average response for the four combinations is given in Table 12. At 5% significance level.00303.62 3.80 3.05063. D.70 4.50 3.34 Computation of sum of squares Correction factor (CF) = (7.17.00303. F and G and interaction effects AB.00203. AF and CE are significant at 5% significance level.00903.94 3.4040 = 0.94 3.58 3. C.3 Interactions and vacant column AB AC AF BD BE CD CE Error (e1) Response total Level 1 Level 2 3.13. SSBE = 0.01563.05063 Similarly SSB = 0. Let us use ANOVA method for Illustration 12.24 3.01822.78 4.3. SSBD = 0.01563 and the vacant column sum of squares SSe1 = 0. The average response for all the main effects is given in Table 12. SSAC = 0. SSE = 0.12.00003. The response totals for all the factors and interactions and the vacant column are given in Table 12. BE and CD) are pooled and added with SSe1 to obtain the pooled error which will be used to test the other effects. SSD = 0. AC.38)2/16 = 3.44 3. E.04 4.14 3.68 3.60 3. the effects whose contribution is less (B.13 Main factors A B C D E F G Response totals for the Illustration 12.02723. These computations are summarized in Table 12. SSAF = 0. the main effects A.16. SSG = 0.38)2 +  8 8 16 = 3.72 Response total Level 1 Level 2 3. Determination of optimal levels: It can be seen from the ANOVA (Table 12.15).24)2 (4. Data Analysis from Taguchi Experiments 223 TABLE 12.82 22.06 C(%) 22.05063 0.00303 0.00003 0.00023 C(%) 22.02723 0.05063 0.15 Source of variation A D F G AB AF CE Pooled error Total Sum of squares 0.417 .88 0.03063 0.02723 0.01 22.99 100.95 13.36 6.32 7.01563 0.82 0.00203 0.517 D 0.03063 0.00023 0.02723 0.01563 0.05063 0.420 F 0.10 100.62 6.00203 0.02723 0.16 Factor main effect Level 1: Level 2: ANOVA (final) for the Illustration 12.01822 0.05063 0.01563 0.05063 0.09 6.00903 0.88 7.00303 0.495 G 0.427 0.06 19.09 3.55 7.09 1.00122 0.01822 0.94 0.05063 0.36 6.01822 0.01822 0.3 Sum of squares 0.87 6.02063 0.01563 0.00122 0.88 11.01563 0.88 1.01563 0.00 TABLE 12.00303 0.05063 0.06 11.95 13.502 0.82 0.22923 TABLE 12.00903 0.82 8.00 Average response of significant main effects A 0.505 0.14 Source of variation A B C D E F G AB AC AF BD BE CD CE Error (SSe1) Total ANOVA (initial) for the Illustratrion 12.405 0.3 Degrees of freedom 1 1 1 1 1 1 1 8 15 Mean square 0.62 10.03063 0.54 6.32 0.00303 0.01563 0.00203 0.00203 0.00258 F0 19.03063 0.22923 Degrees of freedom 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 15 Mean square 0.00003 0.05063 0.01563 0.09 11. the optimal levels for A. Usually.495 0.500 0.1 Treating Defectives as Variable Data In this approach we consider the defectives (bad) as response from the experiment and ignore the non-defectives. There are two approaches to analyse the data through ANOVA. the objective of this experiment will be to minimize the number of .16) respectively. These 1s and 0s are considered as data and analysed.490 0.495 Average of CE C1 E 1 C1 E 2 C2 E 1 C2 E 2 0. For D and G. So. the optimal condition is A2 B2 C2 D1 F1 G1.315 0. 
we use both the classes of data (defectives and non-defectives) and represent mathematically these two classes as 1 and 0 respectively. So. 2. B. These two approaches are discussed using the data from L8 OA given in Table 12. We get attribute data when we classify the product into different categories such as good/bad or accepted/ rejected.18 Trial no.455 0. the response will be either defectives or defects.450 0. 12. how to analyse is discussed in this section. Accordingly. One approach is to treat the defectives (response) from each experiment as variable data and analyse.18. the combinations A2B2. This is treated as three 1s and seventeen 0s. Suppose in a sample of 20 parts. A 1 1 2 3 4 5 6 7 8 1 1 1 1 2 2 2 2 B 2 1 1 2 2 1 1 2 2 L8 OA: Attribute data F 6 1 2 2 1 1 2 2 1 G 7 1 2 2 1 2 1 1 2 Defective (bad) 5 7 2 4 3 1 6 0 Non-defective (good) 20 18 23 21 22 24 19 25 Factors/Columns C D E 3 4 5 1 1 2 2 2 2 1 1 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 12. the optimal levels are 1 and 1 (Table 12. From Table 12. 2 and 1 respectively. In the second approach.17.5 ATTRIBUTE DATA ANALYSIS Variable data is obtained through measurement of the quality characteristic (response).224 Applied Design of Experiments and Taguchi Methods TABLE 12. TABLE 12. When the response from the experiment is in the form of attribute data. C and F are 2.17 Average of AB A1 B 1 A1 B 2 A2 B 1 A2 B 2 0.535 Average response of significant interaction effects Average of AF A1F1 A1F2 A2F1 A2F2 0. A2F1 and C2E1 results in maximum response.495 0.405 Suppose the objective is to maximize the response.360 0.540 0.5. 3 are defective and 17 are non-defective (good). 00 2. SSC = 8.00.00 0.19.00.76 – ..00 42.Data Analysis from Taguchi Experiments 225 defectives.19 Factors Level 1 Level 2 A 18 10 Response totals for the attribute data B 16 12 C 18 10 D 16 12 E 8 20 F 12 16 G 16 12 Note that there are four observations in each level total.00 = 42. SSF = 2. SSE = 18.20 Source of variation A B C D E F G Error (pure) Total ANOVA (initial) for the attribute data Sum of squares 8.76 4. + (6)2 + (0)2 – CF = 140. SSB = 2.00 These computations are summarized in the initial ANOVA Table 12.00.00 = 8.86 4.00 The response totals required for computing sum of squares is given in Table 12.00 2. the sum of squares is computed and ANOVA is carried out.00 2.00 8.00 Degrees of freedom 1 1 1 1 1 1 1 – 7 C(%) 19.05 4. TABLE 12.00 2.05 4. SSG = 2. other factor sum of squares is computed.. SSD = 2.00 – 98.76 19.76 42.00 18. TABLE 12.00 N 8 SSTotal = (5)2 + 7)2 + .00.00 – 98.20.00.00 Similarly. SS A = = (18)2 (10)2 +  CF 4 4 (18)2 + (10)2  CF 4 = 106. Total number of defectives (T) = 28 Total number of observations (N) = 8 (treated as one replicate) CF = (28)2 T2 = = 98. Considering the number of defectives from each experiment as variable. 19).00 8.9). we get mpred = A2 + C2 + E1  2Y = (0.00 42. the bad parts are 5 and 20 good parts.00 8.9% to the total variation.21) that at 5% level of significance only factor E is significant and contributes 42. the optimal levels for minimization of response (defectives) is A2.2 Considering the Two-class Data as 0 and 1 Consider the same data used in Table 12.00 2.05 19.9) Overall average fraction defective ( Y ) = Total number of defectives Total number of items inspected 28 = 0. The 5 bad products are treated as five 1s and the 20 good parts are treated as twenty 0s.05 + 0.85 19. D. A2 = 10 10 8 = 0.00 F0 4. F and G into the error term.21 Source of variation A C E Error (pooled) Total Sum of squares 8. 
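In the first approach the count of defectives in each trial is simply treated as one variable-data observation. A sketch (ours; the defective counts and the standard L8 column assignment are taken from Table 12.18) reproduces the correction factor and the factor sums of squares of the attribute-data ANOVA.

```python
import numpy as np

# Number of defectives found in each of the 8 trials (Table 12.18),
# treated as a single variable-data observation per trial
defectives = np.array([5, 7, 2, 4, 3, 1, 6, 0])

# Standard L8 columns for the seven factors A..G
cols = {"A": [1,1,1,1,2,2,2,2], "B": [1,1,2,2,1,1,2,2], "C": [1,1,2,2,2,2,1,1],
        "D": [1,2,1,2,1,2,1,2], "E": [1,2,1,2,2,1,2,1], "F": [1,2,2,1,1,2,2,1],
        "G": [1,2,2,1,2,1,1,2]}

T, N = defectives.sum(), defectives.size
CF = T**2 / N                                   # 98.0
ss_total = (defectives**2).sum() - CF           # 42.0

for name, col in cols.items():
    col = np.array(col)
    t1, t2 = defectives[col == 1].sum(), defectives[col == 2].sum()
    ss = (t1**2 + t2**2) / 4 - CF               # four trials in each level total
    print(f"{name}: level totals ({t1}, {t2})   SS = {ss:.2f}")
```

The printed level totals match Table 12.19 (for example 18 and 10 for A, 8 and 20 for E), and factor E again dominates the variation.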
C2 = = 0.00 ANOVA (final) for the attribute data Degress of freedom 1 1 1 4 7 Mean square 8. If the data is considered as 1s and 0s.05 42. For trial 1.00 C(%) 19.5.00 4.14 or 14% 200 (12.21). Also we use one-half of the degrees of freedom for predicting the optimal value. the total sum of squares is given by .04) – 2 (0.05 100.5. Therefore.00 9.14) = – 0.14 mpred is a percent defective and it cannot be less than zero (negative).00 8. The predicted optimum response (mpred) is given by mpred = Y + (A2  Y ) + (C2  Y ) + (E1  Y ) = A2 + C2 + E1  2Y (12.226 Applied Design of Experiments and Taguchi Methods Pooling the sum of squares of B. This will happen when we deal with fractions or percentage data.05.18.3.00 18. (12.00 18.05 and E1 = = 0. TABLE 12.04 200 200 200 Substituting these values in Eq.10) (Y ) = Similarly.00 It is observed from ANOVA (Table 12. 12. their contribution seems to be considered as important.05 + 0. we have the final ANOVA (Table 12. Though factors A and C are not significant. How to deal with this is explained in Section 12. C2 and E1 (Table 12. 08 22.08.08 – (0.08.08 0. + (0)2 – CF =T  T2 N (12. SSD = 0.11) where.08 0.08 0. The final ANOVA with the pooled error is given in Table 12. These computations are summarized in Table 12. SSA = (18)2 (10)2 + – CF 100 100 = 4.32 + 0. SSF = 0. SSG = 0.08 0.Data Analysis from Taguchi Experiments 227 SSTotal = (1)2 + (1)2 + (1)2 + (1)2 + (1)2 + (0)2 + (0)2 + .08 At 5% significance level. TABLE 12.32 0.92 = 0.68 2.32 0.24 – 3.32 Similarly other sum of squares is computed. SSB = 0.08 + 0.68 0.23. T = sum of all data N = total number of parts inspected = 200 SSTotal = 28  (28)2 5600  784 4816 = = = 24.08 200 200 200 The sum of squares of columns (factors) is computed similar to that of variable data.08 + 0.68 Sum of squares 0.22 Source of variation A B C D E F G Error (pure) Total ANOVA (initial) for the two-class attribute data Degress of freedom 1 1 1 1 1 1 1 192 199 Mean squares 0.68 6. SSC = 0..72.72 0. .. only factor E is significant.32 + 0.74 0. SSE = 0.4.22.40 24.08 + 0.08 0.08.08 and SSe = SSTotal – SS of all factors = 24.15 0. Note that the number of observations in each total here is 100.08 0.32 0.08 0.32 0.08) = 22.117 F0 2.72 + 0.74 0.32.72 0. 5.228 Applied Design of Experiments and Taguchi Methods TABLE 12.32 0.375 + 15. The Omega transformation formula is: W(dB) = 10 log where.4%.609 dB Converting dB into percent defective.787 – 13.72 0.23 Source of variation A C E Error (pooled) Total Sum of squares 0. When we deal with percentage (fractional) data such as percent defective or percent yield. we obtain mpred = (–12.3 Transformation of Percentage Data The prediction of optimum response is based on additivity of individual factor average responses.766 = –23. Hence.801 5 –12. .787 – 12.5. C2 = 5% and E1 = 4% p 1  p (12. Then the mpred is obtained in terms of dB value. 12.72 24.32 0.787 14 –7.72 22. Per cent defective: dB value: 4 –13.883) = –39.00 This result is same as that obtained in Section 12. the usual interpretation of percent contribution with this type of data analysis will be misleading. Applying transformation.12) ÿ ÿ ÿ mpred = A2 + C2 + E1  2Y = (0.05 + 0.04) – 2(0. As an example.84 2. Y = 14%. the pooling error variance does not affect the results because the error degrees of freedom are already large.1. p is the fraction defective (0 < p < 1).76 2.1.32 0.08 ANOVA (final) for the two-class attribute data Degress of freedom 1 1 1 196 199 Mean square 0.801) – 2(–7. mpred is about 0.5. 
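The 0/1 analysis differs only in the denominators: every inspected part becomes an observation, so each level total is a sum over 100 parts and the total sum of squares reduces to T minus T squared over N. A minimal sketch (ours) with the level totals quoted above:

```python
# Two-class (0/1) analysis of the same experiment: 25 parts per trial,
# 200 parts in all; a defective part counts as 1 and a good part as 0.
level_totals = {"A": (18, 10), "B": (16, 12), "C": (18, 10), "D": (16, 12),
                "E": (8, 20), "F": (12, 16), "G": (16, 12)}   # defectives per level
T, N = 28, 200
CF = T**2 / N                       # 3.92
ss_total = T - CF                   # sum of 1^2 over the defectives minus CF = 24.08

ss_factors = {name: (t1**2 + t2**2) / 100 - CF           # 100 parts inspected per level
              for name, (t1, t2) in level_totals.items()}
ss_error = ss_total - sum(ss_factors.values())           # 22.40 with 192 d.o.f.

for name, ss in ss_factors.items():
    print(f"SS_{name} = {ss:.2f}")
print(f"SS_error = {ss_error:.2f}")
```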
the data is transformed (Omega transformation) into dB values using Omega tables (Appendix D) or formula. consider the illustration discussed in Section 12. the value may approach 0% or 100% respectively. In this case.05 + 0.14 or –14% which is not meaningful.14) = – 0. Because of poor additivity of percentage data.81 100. Also.116 F0 2.21 C(%) 0.32 0. This value is again converted back to percentage using the Omega table or formula.84 0. the predicted value may be more than 100% or less than 0% (negative).883 Now substituting these dB values. A2 = 5%.76 6.51 95. B1 (Illustration 12.O 1 . The experimenter would like to have a range of values within which the true average be expected to fall with some confidence. F0.19 = 4. when we use OA designs and fractional factorial designs because of confounding within the columns of OA and the presence of aliases respectively.7. The sample size for the confirmation experiment is larger than the sample size of any one experimental trial in the original experiment. For the insignificant factors.). Confirmation is necessary. The mean value estimated from the confirmation experiment (mconf) is compared with mpred for validating the experiment/results.1. 12. 12.Data Analysis from Taguchi Experiments 229 12.2. usually levels are selected based on economics and convenience.1 Confidence Interval for a Treatment Mean CI1 = where FB .O 2 = F-value from table ÿ FB . Confirmation experiment is conducted using the optimal levels of the significant factors. NB1 = B1  CI1 B1  CI1  N B1  B1 + CI1 At a = 5%.38 . although any level can be used.O 2  MSe n (12. Poor agreement between mconf and mpred indicate that additivity is not present and there will be poor reproducibility of small scale experiments to large scale production and the experimenter should not implement the predicted optimum condition on a large scale.13) ÿ a = significance level n1 = the numerator degrees of freedom associated with mean which is always 1 n2 = degrees of freedom for pooled error mean square MSe = pooled error mean square/variance n = number of observations used to calculate the mean Suppose we want to compute confidence interval for the factor level/treatment. Generally if mconf is within ± 5% of mpred. this provides a 50% chance that the true average may be greater and 50% chance that the true average may be less than mpred.6 CONFIRMATION EXPERIMENT The purpose of confirmation experiment is to validate the conclusions drawn from the experiment. A good agreement between mconf and mpred indicate that additivity is present in the model and interaction effects can not be dominant. we can assume a good agreement between these two values.05.7 CONFIDENCE INTERVALS The predicted mean (µpred ) from the experiment as well as the mean estimated from the confirmation experiment are point estimates based on the averages of results. This is called the confidence interval. Statistically.O1 . 43) £ÿ mpred £ÿ (9.87 + 3.43 4.38  12.8 The confidence interval for the predicted mean (mpred) is given by mpred – CI2 £ÿ mpred £ÿ mpred + CI2 (9.08 – 2.3 Confidence Interval for Confirmation Experiment CI3 =  1 1 FB .44 £ mpred £ 13.30 12.15) From Illustration 12.87 [Eq.230 Applied Design of Experiments and Taguchi Methods n = 12.O1 .7.87 – 3. Therefore.17 £ mB1 £ 7.O 2  MSe  +  r  neff (12.38  12. mpred = B1 + E2 + A1 D1  D1  Y = 9.O1 . 
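The Omega transformation and its inverse are simple enough to compute directly instead of reading them from Appendix D. The sketch below (ours) repeats the percent-defective prediction in dB and converts the result back to a fraction; the function names are illustrative.

```python
import math

def omega_db(p):
    """Omega transform of a fraction 0 < p < 1 into decibels."""
    return 10 * math.log10(p / (1 - p))

def inverse_omega(db):
    """Convert a dB value back into a fraction."""
    return 1 / (1 + 10 ** (-db / 10))

# Fraction-defective example from the text: grand average 14%,
# significant levels A2 = 5%, C2 = 5% and E1 = 4%.
pred_db = omega_db(0.05) + omega_db(0.05) + omega_db(0.04) - 2 * omega_db(0.14)
print(f"predicted optimum: {pred_db:.3f} dB  ->  {inverse_omega(pred_db):.4f}")
# about -23.6 dB, i.e. roughly 0.4% defective instead of a meaningless negative value
```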
.17 4.7)] Total number of experiments = 8 ´ 3 = 24 and there are four effects in mpred.2.2 Confidence Interval for Predicted Mean The confidence interval for the predicted mean (mpred ) is given by CI2 = FB .O2  MSe neff = 4.25 4.92 = 3.17 12 12. (12.8 1 + 4   5  CI 2 = FB .91 £ mB1 £ 9.92 CI1 = The confidence interval range is 7.7.O2  MSe neff (12.08 and MSe = 12.14) where neff is effective number of observations.O1 .92 = 2. neff = Total number of experiments 1 + sum of df of effects used in N pred (12. B1 = 7.16) where r is the sample size for the confirmation experiment.  24   24  neff =   =   = 4.08 + 2.43) 6. Thus.52 – 4. we may accept that the results are additive and the experimental results are reproducible.24a).24a Factors Weld material (A) Type of welding rod (B) Thickness of weld metal (C) Weld current (D) Preheating (E) Opening of welding parts (F) Drying of welding rod (G) Data for Problem 12. the confidence interval of the predicted mean (6.35 £ mconf £ 13.38  12.17 4.1. a confirmation experiment is run 10 times and the mean value is 9. TABLE 12.8 10  1   1 4.52 + 4.Data Analysis from Taguchi Experiments 231 Suppose for Illustration 12.92  +  = 4.17 £ mconf £ 9. we may infer that the experimental results are reproducible.2.17 5.24b. If the confidence interval range of the predicted mean (mpred ± CI2 ) overlaps with the confidence interval range of the confirmation experiment (mconf ± CI3).05. For example. .2.52.35 to 13. the confidence interval range is mconf – CI3 £ mconf £ mconf + CI3 9.1 An industrial engineer has conducted an experiment on a welding process.8 10   = Therefore.19  12. The following control factors have been investigated (Table 12. PROBLEMS 12.44 to 13. Illustration 12.92  +   4.1 Level 1 SS 41 J100 8 mm 130 A No 2 mm No drying Level 2 SB 35 B17 12 mm 150 A heating by 100°C 3 mm 2 days drying The experiment was conducted using L8 OA and the data (weld strength in kN) obtained are given in Table 12.69 The confidence interval for the predicted mean (mpred) and the confidence interval for the confirmation experiment (mconf) are compared to judge whether the results are reproducible.30) overlaps fairly well with the confidence interval of the confirmation experiment (5. The confidence interval is CI3 = 1   1 F0.69). TABLE 12.1 and analyse using ANOVA and interpret the results.25 Trial no.3 Results R2 R3 19 24 35 26 25 12 6 18 17 28 34 37 30 6 7 23 Factors/Columns AB C AC 3 4 5 1 1 2 2 2 2 1 1 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 D 6 1 2 2 1 1 2 2 1 E 7 1 2 2 1 2 1 1 2 R1 20 22 39 27 23 10 8 17 (a) Analyse the data using response graph method and comment on the results.25). (b) Analyse the data using ANOVA and comment on the results. (b) What is the predicted weld strength at the optimum condition? 12.2 Consider the data given in Problem 12. A 1 1 2 3 4 5 6 7 8 1 1 1 1 2 2 2 2 B 2 1 1 2 2 1 1 2 2 Data for Problem 12. .3 The data is available from an experimental study (Table 12. Analyse the data and determine the optimal levels for the factors. The weld joints were tested for welding defects and the following data were obtained (Table 12.4 An experiment was conducted with eight factors each at three levels on the flash butt welding process.26).1 Results R1 R2 20 22 39 27 23 10 8 17 17 28 34 37 30 6 7 23 Factors/Columns C D E 3 4 5 1 1 2 2 2 2 1 1 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 F 6 1 2 2 1 1 2 2 1 G 7 1 2 2 1 2 1 1 2 (a) Determine the average response for each factor level and identify the significant effects. 12. 
A 1 1 2 3 4 5 6 7 8 1 1 1 1 2 2 2 2 B 2 1 1 2 2 1 1 2 2 Data for Problem 12.24b Trial no.232 Applied Design of Experiments and Taguchi Methods TABLE 12. 12. A 1 1 2 3 4 5 6 7 8 9 1 1 1 2 2 2 3 3 3 Data for Problem 12. The objective is to minimize the response.60 1.4 Response Good Bad 6 6 5 6 6 6 6 6 6 3 5 4 6 6 6 6 5 5 0 0 1 0 0 0 0 0 0 3 1 2 0 0 0 0 1 1 Factors/Columns C D E F 3 4 5 6 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 2 3 1 3 1 2 2 3 1 3 1 2 1 2 3 2 3 1 1 2 3 3 1 2 3 1 2 2 3 1 1 2 3 2 3 1 3 1 2 2 3 1 1 2 3 3 1 2 G 7 1 2 3 3 1 2 2 3 1 2 3 1 3 1 2 1 2 3 H 8 1 2 3 3 1 2 3 1 2 1 2 3 2 3 1 2 3 1 12.27 Trial no. TABLE 12.80 1.27.5 Results R1 R2 2.50 1.35 1.30 2.80 2.70 1.32 2.75 0. each at three levels. A 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 B 2 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 Data for Problem 12.85 1. The data is given in Table 12.10 Factors/Columns B C 2 3 1 2 3 1 2 3 1 2 3 1 2 3 2 3 1 3 1 2 D 4 1 2 3 3 1 2 2 3 1 .40 1. Analyse using ANOVA and identify optimal levels for the factors.30 1.26 Trial no.Data Analysis from Taguchi Experiments 233 TABLE 12.40 2.5 A study was conducted involving three factors A.60 1.00 1.50 2.30 2. B and C. 1 INTRODUCTION Robust design is an engineering methodology used to improve productivity during Research and Development (R&D) activities so that high quality products can be developed fast and at low cost. And on this assumption of equality of variances for the treatment combinations. He argued that by setting the factors with important dispersion effects at their optimal levels. In Taguchi methods. Taguchi has suggested a combined measure of both these effects. The best levels of control factors are those that maximize the S/N ratio. Taguchi was the first to suggest that statistically planned experiments should be used in the product development stage to detect factors that affect variability of the output termed dispersion effects of the factors. And robust design is a procedure used to design products and processes such that their performance is insensitive to noise factors. it is assumed that the variability in the response characteristic remains more or less the same from one trial to another. Thus. the identification of dispersion effect is important to improve the quality of a process. To overcome this problem. In this. Traditional techniques of analysis of experimental designs focus on identifying factors that affect mean response. the mean responses are compared. In order to achieve a robust product/process one has to consider both location effect and dispersion effect. These two measures are combined into a single measure represented by m2/ s 2. These are termed location effects of the factors. the output can be made robust to changes in the operating and environmental conditions during production. Suppose m is the mean effect and s 2 represent variance (dispersion effect). When this assumption is violated. Orthogonal Array experiments are used in Robust design. 234 . the word optimization means determination of best levels for the control factors that minimizes the effect of noise. The data is transformed into S/N ratio and analysed using ANOVA and then optimal levels for the factors is determined. Genich Taguchi in the 1950s. This leads to the development of a robust process/product. statisticians have suggested that the data be suitably transformed before performing ANOVA.C H A P T E R 13 Robust Design 13. 
We say that a product or a process is robust if its performance is not affected by the noise factors. In the technique of ANOVA. the results will be inaccurate. In terms of communications engineering terminology m2 may be termed as the power of the signal and s 2 may be termed as the power of noise and is called the SIGNAL TO NOISE RATIO (S/N ratio). Robust design was developed by Dr. generally called as Taguchi methods. we try to determine product parameters or process factor levels so as to optimize the functional characteristics of products and have minimal sensitivity to noise. TABLE 13. configuration etc. These noise factors are further classified as follows: Outer noise: Produces variation from outside the product/process (environmental factors).1 Product Customer’s usage Temperature Humidity Dust Solar radiation Shock Vibration Deterioration of parts Deterioration of material Oxidation Examples of noise factors Process Outer Noise Incoming material Temperature Humidity Dust Voltage variation Operator performance Batch to batch variation Inner Noise Machinery aging Tool wear Process shift in control Between Products Piece to piece variation Process to process variation . These factors can be classified into the following: · · · · Control factors Noise factors Signal factors Scaling factors Control factors: These factors are those whose values remain fixed once they are chosen. Interactions are not included in the design. These factors can be controlled by the manufacturer and cannot be directly changed by the customer. in robust design only main factors are studied along with the noise factors. 13.2 FACTORS AFFECTING RESPONSE A number of factors influence the performance (response) of a product or process. Examples for these noise factors related to a product and a process are listed in Table 13. Inner noise: Produces variation from inside or within the product/process (functional and time related). material.Robust Design 235 Usually. These include design parameters (specifications) of a product or process such as product dimensions.1. Product noise: Part to part variation. Noise factors: These factors are those over which the manufacturer does not have any control and they vary with the product’s usage and its environment. In many applications. the output follow input signal in a predetermined manner. Of course. Often we deal with the following four types of quality characteristics: · · · · Smaller–the better Nominal–the best Larger–the better Fraction defective Smaller–the better: Here. The objective in robust design is to minimize the sensitivity of a quality characteristic to noise factors. etc. The S/N ratio depends on the type of quality characteristic. only S/N ratios used for optimization of static problems are discussed in this chapter. In static functions (problems) where the output (response) is a fixed target value (maximum strength. 1 n    i 1 n  Yi2    (13. The signal to noise ratio (S/N ratio) is a statistic that combines the mean and variance.3 OBJECTIVE FUNCTIONS IN ROBUST DESIGN Taguchi extended the audio concept of the signal to noise ratio to multi-factor experiments. 13. we need to evaluate the control and signal factors in the presence of noise factors. in setting parameter levels we always maximize the S/N ratio irrespective of the type response (i. Where as dynamic problems are associated with technology development situations.e. the signal factor also varies. It can take any value between 0 – ¥. That is. maximization or minimization). 
accelerator peddles in cars ( application of pressure). Here.. Hence. This is achieved by selecting the factor levels corresponding to the maximum S/N ratio. the signal factor takes a constant value. Scaling factors: These factors are used to shift the mean level of a quality characteristic to achieve the required functional relationship between the signal factor and the quality characteristic (adjusting the mean to the target value).236 Applied Design of Experiments and Taguchi Methods Signal factors: These factors change the magnitude of the response variable. etc. the optimization problem involves the determination of the best levels for the control factors so that the output is at the target value.1) . this factor is also called adjustment factor. In dynamic problems.). volume control in audio amplifiers (angle of turn of knob). the quality characteristic is continuous and non-negative. The S/N ratio (h) is given by h = –10 log  where n is the number of replications. In these problems. tyre wear. minimum wear or a nominal value). Hence. These problems are characterized by the absence of scaling factor (ex: surface roughness. For example. pollution. Types of S/N ratios The static problems associated with batch process optimization are common in the industry. where the quality characteristic (response) will have a variable target value. the aim is to minimize the variation in output even though noise is present. These S/N ratios are often called objective functions in robust design. The desired value (the target) is zero. which minimizes noise. fuel efficiency. The ideal target value of this type quality characteristic is ¥ (as large as possible).4) where p is the fraction defective. The S/N ratio (h) is given by 1 I =  10 log  n    i 1 n 1   Yi2   (13. etc. Sometimes we consider a factor that has a small effect on h. Adjustment factor can be identified after experimentation through ANOVA or from engineering knowledge/experience of the problem concerned. are examples of this type.0 and that of – ¥ to + ¥. In these problems.Robust Design 237 Nominal–the best: In these problems. It can take any value from 0 to ¥. there is no scaling factor. The objective function here is 1  h = –10 log    p   1 or p  hÿ = 10 log    1  p  (13. the best value of p is zero in the case of fraction defective or 1. Larger–the better: The quality characteristic is continuous and non-negative. The adjustment factor can be one of those factors which affect mean only (significant in raw data ANOVA) and no effect on h. We can use adjustment factor to move mean to target in these types of problems. It is called Omega transformation. we identify the adjustment factor and adjust its mean to the target value.2)  i 1 n Yi and n S2 =  i 1 n (Yi  Y ) = n 2  Yi2  nY 2 i 1 n n The optimization is a two step process. First we select control factor levels corresponding to maximum h. Its target value is non-zero and finite.0 if it is percentage of yield. It can take any value from 0 to ¥. h is . In the second step.0 (0 to 100%).3) Fraction defective: When the quality characteristic is a proportion (p) which can take a value from 0 to 1. Y2 The S/N ratio (h) = 10 log   S2  where Y=     (13. The range of values for p is 0 to 1. the quality characteristic is continuous and non-negative. Quality characteristics like strength values. 13.2 gives this design. Apart from the main effects.3. The example below explains this. 
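The static objective functions can be written as small routines; the sketch below (ours) covers the four cases described above. The nominal-the-best version uses the sample variance with n minus 1 in the denominator, which may differ slightly from the convention adopted in the text, and the trial-1 check value is taken from Illustration 13.1.

```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N ratio when the ideal value is zero."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    """S/N ratio when the response should be as large as possible."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

def sn_nominal_the_best(y):
    """S/N ratio when a finite target is best: 10 log(mean^2 / variance)."""
    y = np.asarray(y, dtype=float)
    return 10 * np.log10(y.mean()**2 / y.var(ddof=1))

def sn_fraction_defective(p):
    """S/N ratio for a fraction defective 0 < p < 1."""
    return -10 * np.log10(1.0 / p - 1)

# Example: weld strengths 31, 24, 31 kN give about 28.95 dB (larger the better)
print(f"{sn_larger_the_better([31, 24, 31]):.2f}")
```

Whatever the type of characteristic, the factor levels are subsequently chosen to maximize the computed S/N value.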
An experiment was conducted with an objective of increasing the hardness of a die cast engine component.5 SIMPLE PARAMETER DESIGN The simplest form of parameter design (Robust design) is one in which the noise factors are treated like control factors and data are collected. Table 13. That is. The control factors are assigned to the columns of OA and each experimental trial is repeated for each level of noise factor and data are collected. All the factors are to be studied at two levels. The problem is concerned with a heat treatment process of a manufactured part. the S/N ratio is computed using the appropriate objective function. Simple parameter design with two noise factors This design is explained with a problem. one observation is obtained for each trial. After the data is collected. Simple parameter design with one noise factor The design is illustrated with an experiment. At each position. They have selected the second alternative and conducted an experiment and determined the optimal levels for the process parameters and implemented. the interactions AB and BC are also suspected to have an influence on the response (hardness). The following control factors and noise factors have been identified for investigation. (ii) Conduct a robust design experiment. This type of design is often used when we have only one or two noise factors. position is the noise factor. When investigated. for each trial. Experts suggested two alternatives to eliminate/ reduce the rejection percentage. the cause is not removed but quality has improved due to reduced variation. Also uniform surface hardness (consistency of hardness from position to position) was desired. Since it is difficult to control the transfer time and quenching oil temperature during production. make the product/process robust against the noise factors. · Improve quality (reduce variation) without removing the cause of variation. In this design. The rejections were almost zero. these two are considered as noise factors. (i) Redesign the kiln to remove the cause. So. The data analysis can be carried out after computing S/N ratio for each trial.238 Applied Design of Experiments and Taguchi Methods 13.4 ADVANTAGES OF ROBUST DESIGN The advantages of robust design are as follows: · Dampen the effect of noise (reduce variance) on the performance of a product or process by choosing the proper levels for the control factors. The required design to study this problem is given in Table 13. This was the cause for rejection. it was found that the tiles in the mid zone of the kiln used for firing did not get enough and uniform heat (temperature). . This requires heavy expenditure. A tile manufacturing company had a high percentage of rejection. This S/N ratio is treated as response and analysed using either response graph method or ANOVA as discussed in Chapter 12. A total of seven factors each at two levels were studied using L8 OA. Robust Design 239 TABLE 13. Quenching duration (C).3 Trial no. The following control factors each at two levels were studied. Factors Weld design (A) Cleaning method (B) Preheating temperature (C) Post weld heat treatment (D) Welding current (E) Level 1 Existing Wire brush 100°C Done 40 A Level 2 Modified Grinding 150°C Not done 50 A . Quenching method (D). Noise factors: Transfer time (T). A 1 1 1 1 1 2 2 2 2 TABLE 13. Quenching media (E).1 Simple Parameter Design with One Noise Factor A study was conducted on a manual arc welding process. 
In addition to the control factors the interactions AD and CD were also considered. The objective was to maximize the welding strength. ILLUSTRATION 13.2 Trial no. A 1 1 1 1 1 2 2 2 2 Example of a simple parameter design with one noise factor B 2 1 1 2 2 1 1 2 2 C 3 1 1 2 2 2 2 1 1 D 4 1 2 1 2 1 2 1 2 E 5 1 2 1 2 2 1 2 1 F 6 1 2 2 1 1 2 2 1 G 7 1 2 2 1 2 1 1 2 P1 Y1 * * * * * * * * Position P2 P3 Y2 Y3 * * * * * * * * * * * * * * * * P4 Y4 * * * * * * * * 1 2 3 4 5 6 7 8 Example of a simple parameter design with two noise factors B 2 1 1 2 2 1 1 2 2 AB 3 1 1 2 2 2 2 1 1 C 4 1 2 1 2 1 2 1 2 D 5 1 2 1 2 2 1 2 1 BC 6 1 2 2 1 1 2 2 1 E 7 1 2 2 1 2 1 1 2 Response (hardness) T1 T2 t1 t2 t1 t2 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 2 3 4 5 6 7 8 Control factors: Heating temperature (A). Heating time (B). Quenching oil temperature (t ). That is.240 Applied Design of Experiments and Taguchi Methods Each trial was repeated by three operators (O). the noise factor has three levels.6 and 13. operator was treated as noise factor. A simple parameter design with one noise factor was used and data was collected on the weld strength in kilo Newton (kN). Since variability due to operators affect weld quality. The operator-wise total response for the interactions AD and CD are given in Tables 13.4 Trial no. The design with data is given in Table 13.7 respectively. TABLE 13.5 Main effect A B C D E Level 1 1 2 1 2 1 2 1 2 1 2 105 106 118 93 108 103 103 108 100 111 Operator-wise response totals for the main effects Operator (O) 2 97 89 96 90 97 89 89 97 89 97 Total 3 113 101 111 103 100 114 117 97 104 110 315 296 325 286 305 306 309 302 293 318 . TABLE 13. D 1 1 1 1 1 2 2 2 2 C 2 1 1 2 2 1 1 2 2 CD 3 1 1 2 2 2 2 1 1 Parameter design with one noise factor A 4 1 2 1 2 1 2 1 2 AD 5 1 2 1 2 2 1 2 1 B 6 1 2 2 1 1 2 2 1 E 7 1 2 2 1 2 1 1 2 Total 1 Y1 Operator (O) 2 3 Y2 Y3 24 24 21 20 28 21 24 24 186 31 24 34 28 24 21 24 28 214 1 2 3 4 5 6 7 8 31 24 24 24 29 24 21 34 211 Data analysis using ANOVA The operator-wise total response for all the main factor effects is given in Table 13.4.5. SSC = 0.08 8 8 8 The interaction sum of squares between the factor effects and the noise factor are computed using factor-operator totals.04 – 59.555.04 = 371. The factor interaction sum of squares (AD) are obtained using AD1 and AD2 totals obtained from AD column.00 – 15555.04.04 – 15.6 Operator-wise total response for the interaction AD A1 O2 45 52 Total O3 65 48 165 150 O1 48 58 A2 O2 44 45 Total O3 52 49 144 152 O1 D1 D2 55 50 TABLE 13. SSAO = (105)2 + (97)2 + (113)2 + (106)2 + (89)2 + (101)2 – CF – SSA – SSO 4 = 15640.96 SSA = (315)2 (296)2 +  CF = 15..46 Correction factor (CF) = (611)2 /24 = 15. + (28)2 – CF = 15927.38 12 Sum of squares due to operator is computed using the three operator totals.04.04. SSE = 26.7 Operator-wise total response for the interaction CD C1 O2 48 49 Total O3 55 45 158 147 O1 48 55 C2 O2 41 48 Total O3 62 52 151 155 O1 D1 D2 55 53 Computation of sum of squares: Grand total = 611 Total number of observations = 24 Grand mean ( Y ) = 611/24 = 25.04 Total sum of squares (SSTotal) = (31)2 + (24)2 + (31)2 + (24)2 + . we obtain SSB = 63. SSD = 2.09 .Robust Design 241 TABLE 13.67.04 12 and (313)2  (298)2  CF = 9. SSCD = SSO = (211)2 (186)2 (214)2 + +  CF = 59.25 – 15555. SSAD = (317)2  (294)2  CF = 22.08 = 11.04 12 12 Similarly.. .99** 6. And the interaction effects CO. 
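Because the operator is the noise factor in this study, the raw-data ANOVA also needs the factor-by-operator interaction sums of squares, which come from the level-by-operator cell totals. A sketch (ours, with the weld-strength readings transcribed from Table 13.4) shows the computation for factor A; the other factor-by-operator terms follow the same pattern.

```python
import numpy as np

# Weld strength (kN) for the 8 trials; one column per operator O1, O2, O3
y = np.array([[31, 24, 31],
              [24, 24, 24],
              [24, 21, 34],
              [24, 20, 28],
              [29, 28, 24],
              [24, 21, 21],
              [21, 24, 24],
              [34, 24, 28]], dtype=float)
A_col = np.array([1, 2, 1, 2, 1, 2, 1, 2])      # factor A sits in column 4 of the L8

N = y.size
CF = y.sum()**2 / N
ss_A = sum(y[A_col == lev].sum()**2 for lev in (1, 2)) / 12 - CF
ss_O = sum(y[:, k].sum()**2 for k in range(3)) / 8 - CF

# Factor x noise (A x operator) interaction from the six A-level/operator cells
cell_sq = sum(y[A_col == lev, k].sum()**2 for lev in (1, 2) for k in range(3))
ss_AO = cell_sq / 4 - CF - ss_A - ss_O
print(f"SS_A = {ss_A:.2f}   SS_O = {ss_O:.2f}   SS_AxO = {ss_AO:.2f}")
# approximately 15.04, 59.08 and 11.09, matching the hand calculation
```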
.00 5.04 2.04 – 15.12.1 (Welding strength study) Degrees of freedom 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 9 23 Mean square 15.04 9. it can be seen that the difference between operators is significant. SSCDO = 5. SSADO = (55)2 + (50)2 + (45)2 + .33 9. The average response of all the significant effects is given in Table 13.04 0.61 19.06** — 5.7).09 27.04 – 59.50 – 15555.55 7.04 2.1.00 *Pooled into error term F0.00 0. we will select the optimal factor levels for A and D by considering the average response of AD combinations (Table 13.62 3.93 2. Since the operator effect is significant. TABLE 13.04 26. SSCO = 35.8.8 Source of variation A B C* D* E AD CD* O AO* BO CO DO EO* ADO CDO* Pooled error Total Sum of squares 15.2.242 Applied Design of Experiments and Taguchi Methods Similarly.9 = 5.59 35. DO and ADO are significant even though the main effects A.54 5.04 22.09.37 59. The optimal levels for B. **Significant at 5% level.09 – 59.89 2.54 2.76** 2.8.26 F0 4. SSDO = 59.04 17.09 = 35.96 ANOVA for Illustration 13.04 – 2. C and E are selected based on the average response of these factors.63 17.09 1. From Table 13. SSBO = 27.44** — — 7.26.89 0.09 11.54 0.04 26.04 63.04 9.25.38 371.38** — 7.25 29.04 22..79 29.52 15. + (52)2 + (49)2 – CF – SSA – SSD – SSO – SSAD – SSAO – SSDO 2 = 15758. C and D are not significant.59 The interaction between the factor interaction (AD and CD) and noise factor (O) is computed using the interaction-operator totals (Tables 13.18 5.41 100.05.46** 9.04 63.08 Similarly.79 17.9.06** — 4.90 C(%) 4.37 29.26. These computations are summarized in Table 13.26 35.56 15. SSEO = 1.54 13.08 – 22.05.37 0.59 59.08 5.43 9.6).9 = 4.6 and 13. F0.59.43 1.37 0.98 7.87 9.04 – 11. h = –10 log    1  1 2 + 1 2 + 1 2 TABLE 13. Substituting the raw data of trial 1.5 25. We use the ANOVA method so that the result can be compared with the one obtained by the analysis of raw data.90 27.54 26.3 Average response of the significant effects Effect average response C1 C2 D1 D2 25.5) is almost same. the level for factor C can be 1 or 2. Therefore.60 27.4 and 25.2 Effect average response B1 B2 E1 E2 27.1 23.4 26.11. Analysis using S/N data The raw data from each trial can be transformed into S/N ratio. D 1 1 1 1 1 2 2 2 2 C 2 1 1 2 2 1 1 2 2 CD 3 1 1 2 2 2 2 1 1 A 4 1 2 1 2 1 2 1 2 S/N data for Illustration 13.10.80 27.9 Effect average response A 1D 1 A1D 2 A2D 1 A 2D 2 27.95 (31) (24) (31)   3    Similarly.10 Trial no. for all the trials the S/N data is computed and given in Table 13.Robust Design 243 TABLE 13. 13. Note that the average response of factor C at its two levels (25.5 25.4 25.1 AD 5 1 2 1 2 2 1 2 1 B 6 1 2 2 1 1 2 2 1 E 7 1 2 2 1 2 1 1 2 Operator (O) 1 2 3 Y1 Y2 Y3 31 24 24 24 29 24 21 34 24 24 21 20 28 21 24 24 31 24 34 28 24 21 24 28 S/N ratio (h ) 28.0 24.36 28.3)] is I =  10 log  1 n   i =1 n 1   Yi2      = 28.95 27. . Hence. The quality characteristic under consideration is larger—the better type (maximization of weld strength). the optimum process parameter combination is A1 B1 C2 D1 E2. the S/N data can be analysed using either response graph method or the ANOVA method. The corresponding objective function [Eq.8 24.8 25.0 25.18 28.89 1 2 3 4 5 6 7 8 Now treating the S/N ratio as response.5 The objective of the study is to maximize the weld strength. S/N data analysis using ANOVA The response (S/N) totals for all the factor effects is given in Table 13. 4608 2.2685.95)2 + (27.00 Sum of squares 0. SSD = 0. 
SSC = 0.4608 Similarly.43 14.12 Source of variation A B C* D* E AD CD Pooled error Total *Pooled into error ANOVA for S/N data for Illustration 13.23 C(%) 10..29 112. SSB = 2.2685 0.57)2 + (110.60 Grand mean ( Y ) = 223..85 0.5100 0.0296 F0 15.43 18.64 — — 29.40 11.0392 0.12.89)2 – CF = 6232.8569 – 6228. SSAD = 0.8712 0.3961 Total sum of squares (SSTotal) = (28.5100 0.54 112.93 110.41 112.11 Factor effects A B C D E AD CD Computation of sum of squares: Grand total = 223.22 Total number of observations = 8 Response (S/N) totals Level 1 112.65 109.0200 0.60)2 + (27.8712 0.9982 – 6228.57 113.3961 = 0. TABLE 13.6021 SSA = (112.6021 .61 17.68 110.22)2/8 = 6228.0200.244 Applied Design of Experiments and Taguchi Methods TABLE 13.29 0.62 Level 2 110.94 9.48 111.5100 The ANOVA for the S/N data is given in Table 13.65)2 /4 – CF = 6228.74 111.4608 2.8712.4324 0.90)2 + . SSCD = 0.3961 = 4. SSE = 0.33 111.08 100.0592 4.01 49.1 Degrees of freedom 1 1 1 1 1 1 1 2 7 Means square 0.0392.22/8 = 27.56 76.4324.4324 0.2685 — — 0.81 110. + (28.9025 Correction Factor (CF) = (223.89 111. TABLE 13. Suppose we want to study 7 control factors (A. E. the optimum process parameter combination is A 1 B1 C1 D1 E2.6 INNER/OUTER OA PARAMETER DESIGN When we have more than two noise factors. 1 2 3 4 5 6 7 8 1 A 1 1 1 1 2 2 2 2 2 B 1 1 2 2 1 1 2 2 . the optimal factor levels for A.13 Effect average response A1D1 A1D2 A2D1 A2D2 28.97 27.37 Effect average response C1 C2 E1 E2 27.86 27. each at two levels and three noise factors (X. Since the two interactions AD and CD are significant.13. F and G). TABLE 13. called an Inner OA and another OA for noise factors which is called Outer OA. D. C. E.48 27. Accordingly from Table 13.28 27.83 27. Y and Z) each at two levels.14 Inner/Outer OA design L4 OA (Outer Array) Z Y X 3 C 1 1 2 2 2 2 1 1 4 D 1 2 1 2 1 2 1 2 5 E 1 2 1 2 2 1 2 1 6 F 1 2 2 1 1 2 2 1 7 G 1 2 2 1 2 1 1 2 1 1 1 Y1 * * * * * * * * 2 2 1 Y2 * * * * * * * * 2 1 2 Y3 * * * * * * * * 1 2 2 Y4 * * * * * * * * L8 OA (Inner Array) Trial no.63 28.43 27. For designing an experiment for this problem. C and D should be selected based on the average response of the interaction combinations.14 27. The average response (S/N) is given in Table 13.65 We know that the optimal levels for factors (based on S/N data) are always selected corresponding to the maximum average response. The structure of the design matrix is given in Table 13. B.66 28.04 Effect average response A1 A2 B1 B2 28.57 27. whose contribution is less are pooled into the error term and other effects are tested.Robust Design 245 Note that the two effects.13. we use one OA for control factors. This result is same as that obtained with the raw (original) data.67 27. AD and CD are significant.14. For this problem we can use L 8 as Inner OA and L4 as Outer OA. we normally use Inner/Outer OA parameter design.44 27. B. Depending on the number of control factors and the number of noise factors appropriate OAs are selected for design. At 5% level of significance. it is found that A.85 Average response (S/N) of the significant effects Effect average response C1 D 1 C1 D 2 C2 D 1 C2 D 2 28. 13. 3 8.2 9 15.2 9.2 Inner/Outer OA: Smaller—the better type of quality characteristic Suppose an experiment was conducted using Inner/Outer OA and data were collected as given in Table 13.8)2 + (10. Thus.6 8.4 2 2 1 Y2 9. For each trial of inner OA.7 9. For example.5 10 1 2 2 Y4 10.7)2    1 = –10 log  4 = –10 log(108.1)].6 8. the S/N ratio is computed.3 2 1 2 Y3 10.6 13 12. 
each observation is from one separate experiment. If the objective is to minimize the response.8)2 + (10.6 L8 OA (Inner Array) Trial no.15 Data for Illustration 13. Treating this S/N ratio as response.315) = –20.6 14. ILLUSTRATION 13.16.8 13.4 9. 1 by setting all control factors at first level and all noise factors at their first level and Y2.2 L4 OA (Outer Array) Z Y X 3 C 1 1 2 2 2 2 1 1 4 D 1 2 1 2 1 2 1 2 5 E 1 2 1 2 2 1 2 1 6 F 1 2 2 1 1 2 2 1 7 G 1 2 2 1 2 1 1 2 1 1 1 Y1 10.4 9.7 11. For example. for Trial 1.4 14.8 12. 1 2 3 4 5 6 7 8 1 A 1 1 1 1 2 2 2 2 2 B 1 1 2 2 1 1 2 2 .3)2 + (9. TABLE 13.8 10.246 Applied Design of Experiments and Taguchi Methods Note that the control factors are assigned to the inner array and noise factors are assigned to the outer array.5 14. we use smaller—the better type of quality characteristic to compute S/N ratios [Eq. the S/N ratio is 1 n  Yi2     (10.15.3 8. That is I =  10 log   I =  10 log   1 n  Yi2    The S/N ratios obtained are given in Table 13. Y3 and Y4 are obtained by changing the levels of noise factors alone.8 11.7 12. (13.2 17.5 10.3 9. 4 13.6 6. observation Y1 is obtained for Trial no. the data is analysed to arrive at the optimal levels for the control factors. There are 32 (8 ´ 4) separate test conditions (experiments).4 14.35 This S/N data can be analysed using either response graph method or ANOVA. 4 14.8 11.73 –78.24 7 C –21.8 10.71 2.8 13.18 gives the factor effects and their ranking.15 –19.3 2 1 2 Y3 10.85 –19.20 –21.5 14.05 0.19 –23.77 –84.3 8.5 10.18 Factor Level 1 Level 2 Difference Rank Factor effects (average of S/N ratio) and ranks for Illustration 13.6 8.50 5 D –22. 1 2 3 4 5 6 7 8 1 A 1 1 1 1 2 2 2 2 2 B 1 1 2 2 1 1 2 2 3 C 1 1 2 2 2 2 1 1 4 D 1 2 1 2 1 2 1 2 5 E 1 2 1 2 2 1 2 1 6 F 1 2 2 1 1 2 2 1 7 G 1 2 2 1 2 1 1 2 h Y4 10.00 –20.56 4 B –20.38 –18.Robust Design 247 TABLE 13.44 3 .3 8.21 0.4 9.58 –80.2 Level 1 –82.32 6 G –20.7 11.85 –86.65 –21.70 –88.2 17.6 –20.6 13.17 –21.15 Data analysis using response graph method Now.8 12.3 9. we treat the S/N ratio as the response data and analyse it as discussed in Chapter 12.44 1 E –20.68 0.85 Level 2 –84.5 10.77 –21.2 A –20.17 Factors A B C D E F G Level totals of S/N ratios for Illustration 13.4 13.07 –80.52 2 F –20.7 12.6 14.43/8).0 15.18 –20.4 9.16 S/N ratios for Illustration 13. The level totals of the S/N ratios are given in Table 13.0 1 2 2 L8 OA (Inner Array) Trial no. TABLE 13.09 0.6 8.43 and the grand mean is –20.69 1.84 –84. TABLE 13.65 1.36 –86.93 (–167.2 Z Y X 1 1 1 Y1 10.4 2 2 1 Y2 9.58 The grand total is –167.4 14.17.7 9.31 –23.59 –83.19 –82.0 12.35 –21.2 9.24 –84. Table 13.2 9.21 –21.6 6.81 –21.66 –83. 248 Applied Design of Experiments and Taguchi Methods Figure 13.1 shows the response graph for Illustration 13.2. D2 –19.5 –20.0 Average response –20.5 –21.0 –21.5 A2 B2 C1 E2 D1 Factor levels F2 A1 B1 C2 G1 E1 F1 –22.0 G2 FIGURE 13.1 Response graph for Illustration 13.2. Predicting optimum condition: From Table 13.18, it can be seen that factors D, E and G are significant (first three ranks). Hence the predicted optimum response in terms of S/N ratio (hopt) is given by Iopt = I + (D2  I ) + (E1  I ) + (G1  I ) = D2 + E1 + G1  2  (13.5) I = –19.71 – 20.17 – 20.21 – 2 ´ (–20.93) = –60.09 + 41.86 = –18.23 It is observed from Table 13.16 that the maximum S/N ratio is –18.31 and it corresponds to the experimental Trial no. 6. The optimum condition found is D2 E1 G1. And the experimental Trial no. 
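Computationally the whole inner/outer analysis reduces to one S/N value per inner-array row followed by ordinary level averages of those values. The sketch below (ours) shows the mechanics for a smaller-the-better response; the 8 by 4 response matrix is only a hypothetical placeholder and should be replaced by the actual readings of Table 13.15.

```python
import numpy as np

def smaller_the_better(y):
    return -10 * np.log10(np.mean(np.asarray(y, float) ** 2))

# Standard L8 inner array (columns A..G); the outer L4 only fixes how the four
# noise runs of each trial were set, so it does not enter the arithmetic.
inner = np.array([[1,1,1,1,1,1,1], [1,1,1,2,2,2,2], [1,2,2,1,1,2,2], [1,2,2,2,2,1,1],
                  [2,1,2,1,2,1,2], [2,1,2,2,1,2,1], [2,2,1,1,2,2,1], [2,2,1,2,1,1,2]])

# Hypothetical 8 x 4 response matrix (one column per outer-array noise run)
y = np.array([[10.3,  9.4, 10.8, 10.7], [ 9.2,  9.6, 12.1, 11.8],
              [ 8.6,  8.3, 13.0, 12.4], [ 9.7,  9.5, 14.2, 13.6],
              [10.5,  9.8, 12.6, 12.2], [ 8.3,  8.6,  9.3,  9.1],
              [ 9.6,  9.2, 14.4, 13.8], [10.4,  9.9, 15.2, 14.6]])

sn = np.array([smaller_the_better(row) for row in y])   # one S/N value per inner trial

# Level averages of the S/N ratio and the resulting ranking of factor effects
effects = {}
for j, name in enumerate("ABCDEFG"):
    m1, m2 = sn[inner[:, j] == 1].mean(), sn[inner[:, j] == 2].mean()
    effects[name] = (m1, m2, abs(m1 - m2))
ordered = sorted(effects.items(), key=lambda kv: kv[1][2], reverse=True)
for rank, (name, (m1, m2, d)) in enumerate(ordered, start=1):
    print(f"{name}: {m1:.2f} / {m2:.2f}   range = {d:.2f}   rank = {rank}")
```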
6 includes this condition with a predicted yield of –18.23. From this we can conclude that the optimum condition obtained is satisfactory. However, the result has to be verified through a confirmation experiment. Data analysis using ANOVA The sum of squares is computed using the level totals (Table 13.17). Correction factor (CF) = (167.43)2 = 3504.10 8 Robust Design 249 SSA = (  82.59)2 + (  84.84)2  CF 4 = 3504.73 – 3504.10 = 0.63 Similarly, all factor sum of squares are obtained. SSB = 0.11, SSC = 0.49, SSD = 11.83, SSE = 4.67, SSF = 0.21, SSG = 4.10 SSTotal = 22.04 These are summarised in Table 13.19. Since we have only one replicate, experimental error (pure error) will be zero. And we have to use pooled error for testing the effects which is shown in Table 13.19. At 5% level of significance, it is observed that factors D, E and G are significant. And these three factors together account for about 93% of total variation. This result is same as that obtained in the response graph method. The determination of optimal levels for these factors and hOpt is similar to that of response graph method. TABLE 13.19 Source of variation A* B* C* D E F* G Pooled error Total *Pooled into error ANOVA for Illustration 13.2 Degree of freedom 1 1 1 1 1 1 1 4 7 Mean square – – – 11.83 4.67 – 4.10 0.36 F0 – – – 32.86 12.97 – 11.39 6.53 C(%) 2.86 0.50 2.22 53.68 21.19 0.95 18.60 100.00 Sum of squares 0.63 0.11 0.49 11.83 4.67 0.21 4.10 1.44 22.04 ILLUSTRATION 13.3 Inner/Outer OA: Larger—the better type of quality characteristic In the machining of a metal component, the following control and noise factors were identified as affecting the surface hardness. Control factors: Speed (A), Feed (B), Depth of cut (C) and Tool angle (D) Noise factors: Tensile strength of material (X), Cutting time (Y) and Operator skill (Z) Control factors were studied each at three levels and noise factors each at two levels. L9 OA was used as inner array and L4 as outer array. The control factors were assigned to the inner array and the noise factors to the outer array. The objective of this study is to maximize the surface 250 Applied Design of Experiments and Taguchi Methods hardness. The data collected (hardness measurements) along with the design is given in Table 13.20. TABLE 13.20 Design and data for Illustration 13.3 Z Y X Trial no. 1 2 3 4 5 6 7 8 9 1 A 1 1 1 2 2 2 3 3 3 2 B 1 2 3 1 2 3 1 2 3 3 C 1 2 3 2 3 1 3 1 2 4 D 1 2 3 3 1 2 2 3 1 1 1 1 Y1 273 253 229 223 260 219 256 189 270 1 2 2 Y2 272 254 231 221 260 216 257 190 267 2 1 2 Y3 285 270 260 240 290 245 290 210 290 2 2 1 Y4 284 267 260 243 292 245 290 215 290 S/N ratio (h ) 48.89 48.31 47.73 47.27 48.76 47.23 48.68 46.02 48.90 Total = 431.79 Mean = 47.98 Data analysis: The data of Illustration 13.3 is analysed by ANOVA. For each experiment (trial no.), there are four observations (Y1, Y2, Y3 and Y4). Using this data, for each trial, the S/N ratio is computed. Since the objective is to maximize the hardness, the objective function is larger the better type of quality characteristic for which the S/N ratio is given by Eq. (13.3). For the first trial, the data are 273, 272, 285 and 284. Substituting the data in Eq. (13.3), we get I1 =  10 log    = 48.89 1 =  10 log   Y2  2  n  4 (273) 1 i 1   1    + 1 (272) 2 + 1 (285) 2 +   (284)    1 2 Similarly, for all trials S/N ratios are computed and given in Table 13.20. This S/N ratio data is analysed through ANOVA. Computation of sum of squares and ANOVA: Table 13.21. 
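The S/N calculations quoted above are straightforward to script. The minimal sketch below implements the smaller-the-better form of Eq. (13.1) and the larger-the-better form of Eq. (13.3), and also checks the additive prediction of Eq. (13.5) for the optimum condition of Illustration 13.2; all numerical values in the checks are taken from the text.

```python
# Sketch: S/N ratios for smaller-the-better and larger-the-better characteristics,
# plus the additive prediction of the optimum S/N ratio (Eq. 13.5).
import math

def sn_smaller_the_better(y):
    """eta = -10 log10((1/n) * sum(Yi^2))   -- Eq. (13.1)."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_the_better(y):
    """eta = -10 log10((1/n) * sum(1/Yi^2)) -- Eq. (13.3)."""
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

print(round(sn_smaller_the_better([10.3, 9.8, 10.8, 10.7]), 2))  # -20.35 (Illustration 13.2, trial 1)
print(round(sn_larger_the_better([273, 272, 285, 284]), 2))      #  48.89 (Illustration 13.3, trial 1)

# Additive model for the predicted optimum of Illustration 13.2 (significant factors D, E, G):
grand_mean = -20.93
eta_opt = (-19.71) + (-20.17) + (-20.21) - 2 * grand_mean
print(round(eta_opt, 2))                                         # -18.23
```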
TABLE 13.21 Factor A B C D The level totals of S/N ratios are given in Level totals of S/N ratios for Illustration 13.3 Level 1 144.93 144.84 142.14 146.55 Level 2 143.26 143.09 144.48 144.22 Level 3 143.60 143.86 145.17 141.02 Robust Design 251 Correction factor (CF) = (431.79)2 = 20715.84 9 2 2 2 A1 + A2 + A3  CF 3 SSA = (13.6) = (144.93)2 + (143.26)2 + (143.60)2  20715.84 = 0.52 3 Similarly, SSB = 0.52, SSC = 1.68, SSD = 5.14 As we have only one replicate, experimental error (pure error) will be zero. And we have to use pooled error for testing the effects. The ANOVA for Illustration is given in Table 13.22. At 5% level of significance only the tool angle has significant influence on the surface hardness. And the depth of cut also has considerable influence (21% contribution) on the hardness. TABLE 13.22 Source of variation Speed (A)* Feed (B)* Depth of cut (C) Tool angle (D) Pooled error Total *Pooled into error ANOVA for Illustration 13.3 Degree of freedom 2 2 2 2 4 8 Mean squares – – 0.84 2.57 0.26 F0 – – 3.23 9.88 C(%) 6.62 6.62 21.37 65.39 Sum of squares 0.52 0.52 1.68 5.14 1.04 7.86 Optimal levels for the control factors: The average response in terms of S/N ratio of each level of all factors is given in Table 13.23. TABLE 13.23 Factors A B C D Average response (S/N ratio) for Illustration 13.3 Level 1 48.31 48.28 47.38 48.85 Level 2 47.75 47.70 48.16 48.07 Level 3 47.87 47.95 48.39 47.01 The optimal levels for all the factors are always selected based on the maximum average S/N ratio only irrespective of the objective (maximization or minimization) of the problem. Accordingly, the best levels for the factors are A1 B1 C3 D1. 252 Applied Design of Experiments and Taguchi Methods Predicting the optimum response: Since the contribution of A and B is negligible, we can consider only factors C and D for predicting the optimum response. Iopt = I + (C3  I ) + (D1  I ) = 47.98 + (48.39 – 47.98) + (48.85 – 47.98) = 49.26 (13.7) 13.7 RELATION BETWEEN S/N RATIO AND QUALITY LOSS When experiments are conducted on the existing process/product, it is desirable to assess the benefit obtained by comparing the performance at the optimum condition and the existing condition. One of the measures of performance recommended in robust design is the gain in loss per part (g). It is the difference between loss per part at the optimum condition and the existing condition. g = Lext – Lopt = K(MSD ext – MSDopt) It can also be expressed in terms of S/N ratio (h) = –10 log (MSD). Let he = S/N ratio at the existing condition ho = S/N ratio at the optimum condition Me = MSD at the existing condition Mo = MSD at the optimum condition Therefore, ÿ ÿ ho = – 10 log M o (13.8) Mo = Io 10 10 and Me = Ie 10 10 If RL is the proportion of loss reduction, then  M RL = o = 10  Me  Io +Ie   10  X = 10 10 where X = ho – he . It is the gain in signal ratio. We can also express RL in another way Suppose RL = 0.5X/K Then, 10–X/10 = 0.5X/K That is, 10–1/10 = 0.51/K Taking logarithm on both sides and simplifying, we obtain K = –10 log 0.5 = 3.01, approximately 3.0 Io Ie Therefore, reduction in loss (RL) = 10 –X /10 = 0.5 X /3 = 0.5 3 (13.9) Robust Design 253 RL is a factor by which the MSDext has been reduced. Since RL = MSD opt MSD ext MSDopt = RL(MSDext) and g = K(MSDext – MSDopt) = K(MSDext – RL * MSDext) = K. MSDext (1 – RL) = K. MSDext Io Ie   1  0.5 3       (13.10) Note that the quantity K. MSDext is the loss at the original (existing) condition. 
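A short sketch of Eqs. (13.9) and (13.10) makes the link between S/N gain and quality loss concrete: the fraction of the existing loss that remains is RL = 0.5^(X/3), so every 3 dB of S/N gain roughly halves the loss to society. The two η values in the example call are hypothetical and stand in for the optimum and existing conditions.

```python
# Sketch of Eqs. (13.9)-(13.10): after a gain X = eta_opt - eta_ext (in dB),
# the remaining fraction of the original loss is RL = 0.5**(X/3), and the
# saving g = K * MSD_ext * (1 - RL).

def loss_reduction(eta_opt, eta_ext):
    gain = eta_opt - eta_ext          # gain in S/N ratio, dB
    rl = 0.5 ** (gain / 3)            # remaining fraction of the original loss
    return gain, rl, 1 - rl           # gain, RL, fraction of the loss saved

gain, rl, saved = loss_reduction(-18.0, -21.0)   # hypothetical 3 dB improvement
print(gain, round(rl, 2), round(saved, 2))       # 3.0 0.5 0.5  -> about 50% of the loss saved
```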
Suppose the value of the quantity in the parenthesis of Eq. (13.10) is 0.40. That is, g = K. MSDext.(0.40) This indicates that a savings of 40% of original loss to the society is achieved. ILLUSTRATION 13.4 Estimation of Quality Loss In Illustration 13.2, we have obtained hopt as –18.23 [Eq. (13.5)]. For the same problem in Illustration 13.2, suppose the user is using D1, E1, and G1 at present. For this existing condition we can compute the S/N ratio (hext). By comparing this with hopt, we can compute gain in S/N ratio. Using data from Table 13.18, Iext = I + (D1  I ) + (E1  I ) + (G1  I) (13.11) = D1 + E1 + G1  2  I = –22.15 – 20.17 – 20.21 – 2 ´ (–20.93) = –62.53 + 41.86 = –20.67 Therefore, the reduction in loss (RL) to society is computed from Eq. (13.10). g = K MSDext  Iopt  Iext       3 1  0.5         g = K MSDext  18.23 + 20.67      3  1  0.5     254 or Applied Design of Experiments and Taguchi Methods g = K MSDext (1 – 0.50.81) = K MSDext (1 – 0.57) = K MSDext ´ 0.43 This indicates that there is a saving of 43% of the original loss to society. PROBLEMS 13.1 An experiment was conducted with seven main factors (A, B, C, D, E, F, and G) using L8 OA and the following data was collected (Table 13.24). Assuming larger—the better type quality characteristic, compute S/N ratios and identify the optimal levels for the factors. TABLE 13.24 Trial no. 1 2 3 4 5 6 7 8 Data for the Problem 13.1 Response R1 R2 R3 11 4 4 4 9 4 1 14 4 4 1 0 8 1 4 4 11 4 14 8 4 1 4 8 A 1 1 1 1 1 2 2 2 2 B 2 1 1 2 2 1 1 2 2 Factors/Columns C D E F 3 4 5 6 1 1 2 2 2 2 1 1 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 1 2 2 1 1 2 2 1 G 7 1 2 2 1 2 1 1 2 13.2 Suppose data is available from an Inner/Outer OA experiment (Table 13.25). (a) Assuming smaller—the better type of quality characteristic, compute S/N ratios. (b) Analyse S/N data using response graph method and determine significant effects. (c) Compute S/N ratio at the optimum condition (hopt). (d) Assuming that all the significant factors are currently at their first level, compute S/N ratio for the existing condition (hext). (e) Relate the gain in loss to society to the gain in signal to noise ratio. 1 2 3 4 5 6 7 8 1 A 1 1 1 1 2 2 2 2 2 B 1 1 2 2 1 1 2 2 3 C 1 1 2 2 2 2 1 1 4 D 1 2 1 2 1 2 1 2 5 E 1 2 1 2 2 1 2 1 6 F 1 2 2 1 1 2 2 1 7 G 1 2 2 1 2 1 1 2 13. (c) Perform ANOVA and determine the significant effects. (a) Assume larger—the better type of quality characteristic and compute S/N ratios.Robust Design 255 TABLE 13. (b) Compute sum of squares of all factors.2 L4 OA (Outer Array) Z Y X 1 1 1 Y1 39 52 31 45 35 23 18 26 2 2 1 Y2 38 53 36 42 32 22 14 23 2 1 2 Y3 35 55 34 43 33 21 20 25 1 2 2 Y4 36 50 35 45 37 25 21 22 L8 OA (Inner Array) Trial no.25 Data for Problem 13.3 Consider the data in Problem 13.2. . Suppose we have a problem where one factor has to be studied at four-level and others at two-level.C H A P T E R 14 Multi-level Factor Designs 14. 14. Dummy treatment: In this method. When these standard arrays are not suitable for studying the problem under consideration. Merging of columns: In this method two or more columns of a standard OA are merged to create a multi-level column. accommodating a three-level factor into a two-level OA. Idle column method: This method can be used to accommodate many three-level factors into a two-level OA. These modified OAs when used for designing an experiment are termed multi-level factor designs. 
There is no standard OA which can accommodate one four-level factor along with two-level factors or a three-level factor with twolevel factors.2 METHODS FOR MULTI-LEVEL FACTOR DESIGNS The methods which are often employed for accommodating multiple levels are as follows: 1. 256 . 3. a four-level factor can be fit into a two-level OA. we may have to modify the standard orthogonal arrays. When we deal with discrete factors and continuous variables. we encounter with multi-level factor designs. In this chapter some of the methods used for modification of standard OAs and their application are discussed. The advantage of method 3 and 4 is a smaller experiment. The first two methods do not result in loss of orthogonality of the entire experiment.1 INTRODUCTION Usually. For example. To deal with such situations we have to modify a two-level standard OA to accommodate three-level or four-level factors. 2. we make use of the standard Orthogonal Arrays (OAs) for designing an experiment. Methods 3 and 4 cause a loss of orthoganality among the factors studied by these methods and hence loss of an accurate estimate of the independent factorial effects. Combination method: This method can be used to fit two-level factors into a three-level OA. 4. For example. one of the levels of a higher level factor is treated as dummy when it is fit into a lesser number level OA. 2. any two columns will contain the pairs (1. 1) and (2. In a two-level OA. such as 1. 1) and level 4 to (2. level 3 to (2. 2).1 Merging Columns Merging columns method is more useful to convert a two-level OA to accommodate some fourlevel columns. For creating the four-level column. The computation of sum of squares for the four-level column is SSA = 2 2 2 2 A1 + A2 + A3 + A4  CF nA (14. (1. we have to merge (combine) three two-level columns to create one four-level column. The L8 OA modified with a four-level columns is given in Table 14.1 Trial no. 1 1 2 3 4 5 6 7 8 1 1 2 2 3 3 4 4 2 1 2 1 2 1 2 1 2 L8 OA modified for a four-level factor Columns 3 1 2 1 2 2 1 2 1 4 1 2 2 1 1 2 2 1 5 1 2 2 1 2 1 1 2 .2. 2 and 3) of L8 OA to create a four-level column.1) where nA is the number of observations in the total Ai. 2). 2). 1). TABLE 14. 2 and 3 of L8 OA Columns 2 1 1 2 2 1 1 2 2 Four-level column 3 1 1 2 2 2 2 1 1 1 1 2 2 3 3 4 4 Suppose factor A is assigned to the four-level column. 2 and 3 or 2. (2.Multi-level Factor Designs 257 14. TABLE 14. So.2 Trial no. 1) level 2 to (1.1 gives the merging of columns (1. The fourlevel factors will have 3 degrees of freedom (df) and a two-level column has 1 degree of freedom. the following procedure is followed. The four-level column is created by assigning level 1 to pair (1. It is recommended to merge three mutually interactive columns. The merging of mutually interactive columns minimizes the confounding of interactions. 1 2 3 4 5 6 7 8 1 1 1 1 1 2 2 2 2 Merging columns 1. 6 and 4 of L8 OA. 2). Suppose we want to introduce a four-level column into a two-level OA. Table 14. 3. The factor A has 1 degree of freedom. Similarly. whereas the column has 2 degrees of freedom.2) The interaction degrees of freedom = 3 ´ 1 = 3 (equal to the sum of df of columns 5. FIGURE 14. Interaction between the four-level and two-level factor Suppose we assign factor A to the four-level column obtained by merging columns 1. we can assign a two-level factor to a three-level column of L9 OA and treat one of the levels as dummy. So. one with two levels and the other three factors each at three levels.1). 
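The merging operation itself is mechanical, as the sketch below shows: the level pairs of columns 1 and 2 of the standard L8 are mapped onto the four levels (column 3, their interaction column, is absorbed and supplies the third degree of freedom), and the sum of squares of the resulting four-level factor follows Eq. (14.1). The response vector in the example call is made up purely to show the call.

```python
# Sketch: creating a four-level column in L8 by merging columns 1, 2 and 3,
# and computing the sum of squares of a four-level factor (Eq. 14.1).

col1 = [1, 1, 1, 1, 2, 2, 2, 2]        # column 1 of the standard L8
col2 = [1, 1, 2, 2, 1, 1, 2, 2]        # column 2 of the standard L8
pair_to_level = {(1, 1): 1, (1, 2): 2, (2, 1): 3, (2, 2): 4}
four_level = [pair_to_level[(a, b)] for a, b in zip(col1, col2)]
print(four_level)                       # [1, 1, 2, 2, 3, 3, 4, 4], as in Table 14.2

def ss_four_level(y, levels):
    """SS_A = sum_i (A_i^2 / n_i) - CF, summed over the four levels."""
    cf = sum(y) ** 2 / len(y)
    ss = 0.0
    for lev in sorted(set(levels)):
        a_i = [v for v, l in zip(y, levels) if l == lev]
        ss += sum(a_i) ** 2 / len(a_i)
    return ss - cf

# Hypothetical responses for the eight trials, only to show the call:
y = [12, 10, 15, 14, 9, 11, 16, 13]
print(round(ss_four_level(y, four_level), 2))   # 33.0 for this made-up data
```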
Conversion of L9 OA to accommodate one two-level factor Suppose. 2 and 3 of standard L8 OA. SSAB = SS5 + SS6 + SS7 (14. this extra one degree of freedom has to be combined with the error. we can also modify the standard L16 OA to accommodate more than one four-level factor. 2 and 3 of standard L8 OA and factor B to the column 4 of the standard L8 OA. For example. to make one of the levels of a factor as dummy. we want to study four factors. The appropriate OA here is L9. That is. 6 and 7). a three-level factor can be assigned to a four-level column and treat one level as dummy. The Interaction sum of squares SSAB is given by the sum of squares of columns 5.1 shows the linear graph for the AB interaction. Note that level 3 is dummy treated as given in Table 14. Accordingly.2 Dummy Treatment Any factor has to be studied with a minimum of two levels in order to obtain the factor effect. 6 and 7 (Figure 14.2.2 has been obtained by merging columns 1. Figure 14. Other combinations of the four-level factor can be obtained by selecting any other set of three mutually interactive columns from the triangular table of L8 OA (Appendix B).258 Applied Design of Experiments and Taguchi Methods Table 14. So. it is modified with a dummy treatment as given in Table 14.3. The sum of squares of factor A is given by SS A =  )2 ( A1 + A1 A2 + 2  CF n A1 + n A1 n A2 (14. We can use any one level as dummy. like this.3) . the OA should have at least three levels. 14. When we want to assign a two-level factor (A) to the first column of the standard L9 OA.1 Interaction between a four-level factor and a two-level factor. dummy level designs can be used with columns that have three or more levels. we create a four-level column using the method of merging columns. Using dummy treatment. Suppose we want to create a three-level column in L8 OA. 1 2 3 4 5 6 7 8 9 Dummy level design of L9 OA 1(A ) 1 1 1 2 2 2 1¢ 1¢ 1¢ 2 1 2 3 1 2 3 1 2 3 3 1 2 3 3 2 1 3 1 2 4 1 2 3 3 1 2 2 3 1 Conversion of a two-level OA to accommodate a three-level factor It involves the following two-step procedure Step 1: Step 2: Obtain a four-level factor from a two-level OA. TABLE 14.Multi-level Factor Designs 259 The error sum of squares associated with the dummy column is given by SSeA = )2 (A1  A1  n A1 + n A 1 (14. The sum of squares is computed as follows: . the fourlevel column is converted into a three-level factor column. 2. 3 1 1 2 2 3 3 4 4 L8 OA Dummy treatment for factor A (three-level) Alternate three-level options from the four-level 1 1 2 2 3 3 1¢ 1¢ 1 1 2 2 3 3 2¢ 2¢ 1 1 2 2 3 3 3¢ 3¢ Suppose factor A with three levels is assigned to the dummy column. TABLE 14. obtain three-level factor from the possible four levels. This column has 3 degrees of freedom whereas the factor A has 2 degrees of freedom. Then using dummy treatment method. This is given in Table 14.4) This error (SSeA) has to be added with the experimental error (pure error) along with its degree of freedom (1 df). So.4. the extra one degree of freedom is to be combined with the experimental error (pure error). First.4 Four-level 1.3 Trial no. 5). N = 18 CF = SS A = 1182 T2 = = 773. 
we have collected the data using the following Data for dummy-level design B 2 1 2 3 1 2 3 1 2 3 C 3 1 2 3 2 3 1 3 1 2 D 4 1 2 3 3 1 2 2 3 1 Response R1 R2 15 4 0 6 6 12 6 9 0 13 3 1 7 5 14 5 10 2 The analysis of the data using ANOVA is explained below.5) SSe A = (14.6 Level A 1 2 3 1¢ 36 50 – 32 Response totals for the dummy-level design Factors B C 52 37 29 – 73 22 23 – D 41 44 33 – Grand total (T) = 118. TABLE 14.56 N 18  )2 (A1 + A1 A2 + 2  CF n A1 + n A1 n A2 .260 Applied Design of Experiments and Taguchi Methods SS A =  )2 A2 A2 ( A1 + A1 + 2 + 3  CF n A1 + n  n A2 n A3 A1  )2 ( A1  A1 1 n A1 + n A (14. 1 2 3 4 5 6 7 8 9 A 1 1 1 1 2 2 2 1¢ 1¢ 1¢ Suppose.1 is given in Table 14.6. TABLE 14.6) Data analysis from dummy-level design: dummy-level design (Table 14.5 Experiment no. The response totals for Illustration 14. 56 6 = 45.7 Source A B C D* Sum of squares 28.75.44 10. + 102 + 22 – CF = 1152.12 378.44 SSB = 2 2 2 B1 + B2 + B3  CF nB = 522 + 372 + 292  773.56 = 28.89 .17 12..01 21.33 = 9.44 – 283.72 _ _ _ 1.12 = 3.12 = 4.44 SSeA =  )2 ( A1  A1 (36  32)2 16 = = = 1.2.44 22.92 80.Multi-level Factor Designs 261 = 682 50 2  CF + 6+6 6 = 385.56 = 378. SSC = 283.56 _ _ _ _ Significance Significant Significant Significant e* A Error epooled Total * Pooled into error.76 F0 16.33 + 416.44 Similarly.67 – 773.44 283.7.78 – 1.33 n A1 + n  6 + 6 12 A1 SSe = SSTotal – SSA – SSB – SSC – SSD – SSeA = 378.44 ANOVA for the dummy-level design Degrees of freedom 1 2 2 2 1 9 12 17 Mean squares 28.44 and SSD = 10.78 SSTotal = 152 + 42 + .33 9.1.44 45.44 – 28.78 1.01 The computations are summarized in the ANOVA Table 14.44 – 45.. F5%. TABLE 14. F5%.44 – 10.00 – 773.72 141. A2B1 and A2B2 Suppose.2. How to combine two two-level factors A and B to study their effects is explained as follows: The four possible combinations of two-level factors A and B are A1B1. by comparing A2B2 with A2B1. If the number of factors is at the limit of the number of columns available in an OA and the next larger OA is not economical to use and we do not want to eliminate factors to fit the OA. we use the combination method. A2B1 and A2B2 selected are assigned to the column 1 of L9 OA as follows. In this method a pair of two-level factors is treated as a three-level factor. AB1 (Table 14. the orthogonality between A and B is lost. we select the test conditions A1B1. Now by comparing A1B1 with A2B1. interaction between A and B cannot be studied. say. D and E. A2B2 and one of the remaining two. Only under certain circumstances combination method is used. Because one test condition A1B2 is not tested. the effect of factor B can be estimated (factor A is kept constant). A2B2 and A 2B1) can be assigned to one column in a three-level OA as a combined factor AB (Table 14.262 Applied Design of Experiments and Taguchi Methods 14.8 Trial no. A1 B2. the effect of A can be estimated (factor B is kept constant).3 Combination Method Column merging and dummy treatment methods are widely used. AB2 is assigned to A2B1 and AB3 is assigned to A2B2 (Table 14. A2B1 .8). Table 14. TABLE 14. Suppose we want to study a mixture of two-level and three-level factors.9). AB 1 1 2 3 4 5 6 7 8 9 1 1 1 2 2 2 3 3 3 Combined factors assignment Factors/columns C 2 1 2 3 1 2 3 1 2 3 D 3 1 2 3 2 3 1 3 1 2 E 4 1 2 3 3 1 2 2 3 1 The three combinations A1B1. These three test conditions (A1 B1. . Similarly.8) is assigned to A1B1. However. 
We can use a two-level OA or a three-level OA with dummy treatment method depending on the number of two-level or three-level factors.9 also gives the assignment of three-level factors C. The interaction between these two factors cannot be studied. 9 gives the experimental results. The data are analysed as follows. That is. Similarly.9 Trial no. AB 1 1 2 3 4 5 6 7 8 9 1 1 1 2 2 2 2 2 2 Combined factor layout design Factors/Columns C 2 1 1 1 1 1 1 2 2 2 1 2 3 1 2 3 1 2 3 D 3 1 2 3 2 3 1 3 1 2 E 4 1 2 3 3 1 2 2 3 1 4 7 10 5 9 1 14 5 11 Response (Y) Total = 66 Data analysis from combined factor design: Suppose. The response totals for all the factors is given in Table 14.00 9 N SSTotal = (4)2 + (7)2 + … + (11)2 – CF = 614 – 484 = 130.10 Level AB 1 2 3 21 15 30 A (B1 constant) 21(A1B1 ) 15(A2B1 ) — Response totals for the combined design Factors (effects) B (A2 constant) 15(A2B1) 30(A2B2) — C 23 21 22 D 10 23 33 E 24 22 20 Note that the totals of A and B are obtained such that the individual effects of A and B are possible. total for B1 and B2 are obtained from A2B1 and A2B2.10. A1 = 21 (combination of A 1B1) and A2 = 15 (combination of A 2B1) Since B1 is kept constant in these two combinations. the last column of Table 14.00 . TABLE 14.Multi-level Factor Designs 263 TABLE 14. Computation of sum of squares: CF = T2 (66)2 = = 484. effect of A can be obtained. we need to compute SSA and SSB.5 3 6 For orthogonality between A and B . SS A = (A1 ) 2 + (A2 ) 2 (A1 + A2 ) 2  nA n A1 + n A2 = = (21)2 + (15)2 (36)2  3 6 666 1296  = 222 – 216 = 6.66 – 484 = 0.00 3 = 522 – 484 = 38. SSAB ¹ SSA + SSB 38 ¹ 6 + 37.00 3 6 SSB = (B1 )2 + (B2 )2 (B1 + B2 )2  nB nB1 + nB2 = (15)2 + (30)2 (45)2  = 375 – 337.67 + 2.5 .67 – 484 = 88.00 Since only one replication is taken the experimental error (SSe) is obviously zero.5 = 37.67 – 484 = 2.66 + 88. SSC.10) nAB = (21)2 + (15)2 + (30)2 – 484.00 Similarly. we must have SSAB = SSA + SSB But here we have.67 3 SSD = SSE = SSe = SSTotal – SSAB – SSC – SSD – SSE = 130 – (38 + 0.66 3 (10)2 + (23)2 + (33)2 – 484 = 572.67 3 (24)2 + (22)2 + (20)2 – 484 = 486.264 Applied Design of Experiments and Taguchi Methods SSAB = (AB1 )2 + (AB2 )2 + (AB3 )2 – CF (AB totals are arrived from Table 14. SSD and SSE are computed SSC = (23)2 + (21)2 + (22)2 – 484 = 484. In order to test the individual effects of A and B.67) = 130 – 130 = 0. 5 = 5. DAB = SSAB – (SSA + SSB) = 38 – (6 + 37. TABLE 14. To take care of this. For example.4 Idle Column Method The idle column method is used to assign three-level factors in a two-level OA. against level 1 of the idle column we assign level 1 and 2 of the three-level factor and against level 2 of the idle column we assign levels 2 and 3 of the three-level factor.5 –5. For two or more three-level factors. A is assigned to column 2 of L8 OA.6 48. A and B are not orthogonal.2. F5%. always the first column of the two-level OA is the idle column. The analysis of variance is given in Table 14.2. If a three-level factor say. a correction to the sum of squares has to be made in the ANOVA.7 Significance Significant Significant 57.5 0.33 — 0.6 Significant * Pooled into error. But the idle column design is more efficient even though one column is left idle (no assignment).1. F5%. The levels of this column are the basis for assigning the three-level factors.66 88.5 Note that there is no degree of freedom associated with DAB because AB has 2 degrees of freedom and A and B has 1 degree of freedom each.5 = 6. 
The idle column is the common column in the mutually inter active groups where three-level factors are assigned.79 14.61.5 — — 44.11 ANOVA for the combined factor design Source AB A* B D* AB C* D E* Pooled Total Sum of squares 38. Its levels serve as the basis for the assignment of the three-level factors.5) = –5. The idle column should always be the first column of the two-level OA.67 2.11.00 Degrees of freedom 2 1 1 0 2 2 2 5 8 Mean square 19 — 37.00 6. And it is so in the combination design.Multi-level Factor Designs 265 That is. .67 3. idle column method reduces the size of the experiment but sacrifies the orthogonality of the three-level factors.77 F0 24. if another factor B with three-levels is assigned to column 4. Table 14. A three-level factor can be assigned in a two-level OA by creating a four-level column and making 1 level as dummy.12 gives the idle column assignment of one three-level factor (A). the interaction column 5 is eliminated. This design is not efficient since we waste 1 degree of freedom in the error due to the dummy level. As already mentioned.00 37. the interaction column 3 is dropped. Suppose the difference between SSAB and (SSA + SSB) is D AB.83 130. Similarly. the Idle column corresponds to the first column of the standard L8 OA and columns 2 and 3 corresponds to the columns 2 and 4 of L8 OA respectively. Columns 4 and 5 of Table 14. Idle column assignment in L8 OA Factors/Columns Idle A 1 2 3 1 1 1 1 2 2 2 2 1 1 2 2 2 2 3 3 1 1 2 2 2 2 1 1 1 2 3 4 5 6 7 8 Suppose we have two three-level factors (A and B) and two two-level factors (C and D).266 Applied Design of Experiments and Taguchi Methods TABLE 14.13. Data analysis from idle column method: Consider the response data from one replication given in Table 14.13 Trial no.13 corresponds to the columns 6 and 7 of the standard L8 OA respectively. we require 6 degrees of freedom. Idle 1 1 2 3 4 5 6 7 8 1 1 1 1 2 2 2 2 Idle column factor assignment Factors/Columns A B C 2 3 4 1 1 2 2 2 2 3 3 1 2 1 2 2 3 2 3 1 2 2 1 1 2 2 1 Response D 5 1 2 2 1 2 1 1 2 Total 10 12 6 4 1 8 4 7 52 In Table 14.12 Trial no. Hence. TABLE 14.13.13 gives the assignment of these factors and data for illustration. we can use L8 OA for assignment. . To study these factors. The response totals are computed first as usual (Table 14.14). Columns 3 and 5 of the standard L8 OA are dropped. Table 14. Multi-level Factor Designs 267 TABLE 14. The sum of squares for the idle column factors A and B are computed as follows:  AI 2 SSA12 =  11  n AI  11 2   AI12 +   nAI   12  (AI11 + AI12 )2    nAI11 + n AI12  = (22)2 (10)2 (22 + 10)2 +  2 2 2+2 = 292 – 256 = 36.00 .00 The sum of squares of the idle column SSI is computed from the idle column. SSC = 8.00 8 SSTotal = (10)2 + (12)2 + … + (7)2 – CF = 426 – 338 = 88. and SSD = 0.14 Level Idle (I) 1 2 32 20 Response totals for idle column design Factors Idle 1 (I1) C 22 30 D 26 26 Idle 2 (I2) Level 1 2 2 3 A 22 10 9 11 B 16 16 5 15 Computation of sum of squares: CF = (52)2 = 338.00  AI 2 SSA23 =  22  nAI  22 2   AI 23 +   nAI   23  (AI 22 + AI 23 )2    nAI22 + nAI23  = (9)2 (11)2 (9 + 11)2 +  2 2 2+2 = 101 – 100 = 1. SSI = 2 2 (I1 + I2 ) – CF ni where ni is the number of observations in the total Ii = (32)2 + (20)2  338 4 = 356 – 338 = 18 Similarly computing. 00 0.1. TABLE 14.71. SSB12 = (16)2 (16)2 (16 + 16)2 +  2 2 2+2 (5)2 (15)2 (15 + 5)2 +  2 2 2+2 = 256 – 256 = 0.11 — — Significance Significant Significant Significant * Pooled into error. 
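The combined-factor arithmetic above can be checked numerically. The sketch below recomputes, from the level totals of Table 14.10 (AB1 = 21, AB2 = 15, AB3 = 30, with three observations per level and grand total 66), the sums of squares SS_AB, SS_A and SS_B, and the discrepancy that arises because A and B are not orthogonal in this design.

```python
# Sketch: sums of squares for the combined factor AB, using the level totals
# of Table 14.10 (n = 3 observations per AB level, grand total T = 66, N = 9).

T, N = 66, 9
CF = T ** 2 / N                                             # 484.0

AB = {1: 21, 2: 15, 3: 30}
SS_AB = sum(t ** 2 / 3 for t in AB.values()) - CF           # 38.0

# Factor A is compared at constant B1: A1 = AB1 = 21, A2 = AB2 = 15.
SS_A = (21 ** 2 / 3 + 15 ** 2 / 3) - (21 + 15) ** 2 / 6     # 6.0
# Factor B is compared at constant A2: B1 = AB2 = 15, B2 = AB3 = 30.
SS_B = (15 ** 2 / 3 + 30 ** 2 / 3) - (15 + 30) ** 2 / 6     # 37.5

D_AB = SS_AB - (SS_A + SS_B)                                # -5.5, the correction carried in the ANOVA
print(SS_AB, SS_A, SS_B, D_AB)
```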
F5%.00 SSB23 = = 125 –100 = 25.00 1.5 (from I2) If the quality characteristic is lower—the better type. A1 = From I 2 .5 and B3 = 7.15. Since factors C and D are not significant.4 = 7.00 36.25 F0 8 16 — — 11. A2. From I1 .00 8. their levels can be selected based on economies and convenience. .268 Applied Design of Experiments and Taguchi Methods Similarly. the best levels for factors A and B are A2 and B2 respectively. The best levels for A and B can be selected by computing the average response for A1.00 ANOVA for idle column experiment Degrees of freedom 1 1 1 1 1 1 1 4 Mean square 18 36 — — 25 — — 2.00 0. From the ANOVA it is seen that the factors A1–2 and B2–3 are significant.5 2 2 Similarly. A3 and B1.00 25. This indicates that the difference in the average condition A 1 to A2 and B2 to B3 is significant.00 The ANOVA summary is given in Table 14. B2 and B3.15 Source Idle A1–2 A* 2–3 B* 1–2 B2–3 C* D* Pooled error Sum of squares 18. B1 = 8 and B2 = 8 (from I1) B2 = 2.00 9.5 and A3 = = 5. A2 = 22 10 = 11 and A2 = =5 2 2 9 11 = 4. G and H) have been considered. D. F. All factors have two-levels except factor B which has four-levels. B. C. The required linear graph for Illustration 14. L16 OA is selected for the design.3. SOLUTION: Required degrees of freedom for the problem are 7 3 3 1 1 15 df df df df df Seven two-level factors: One four-level factor (B): Interaction AB: Interaction AC: Interaction AG: Total degrees of freedom: Hence.1 In a study of synthetic rubber manufacturing process. E. the experimenter wants to investigate the two-factor interactions AB. The required linear graph (Figure 14.1 is shown in Figure 14.2.1. The design layout is given in Table 14. G 6 1 7 A 1 13 C 12 3 11 9 4 5 14 15 D 2 8 B E F H 10 FIGURE 14.Multi-level Factor Designs 269 ILLUSTRATION 14. . G A C 4 5 14 15 D B E F H FIGURE 14. AC and AG.2 Required linear graph for Illustration 14. eight factors (A.3 Superimposed linear graph for Illustration 14.2) is superimposed on the standard linear graph and is shown in Figure 14. In addition. Design a matrix experiment.16.1. .2 An experimenter wants to study two four-level factors (P and Q) and five two-level factors (A. B. The standard L16 OA has to be modified to accommodate the two four-level factors.2.8. Further he is also interested to study the two factor interactions AB and AC. SOLUTION: The problem has 13 degrees of freedom. The required linear graph is shown in Figure 14.16 Trial no. D and E).4 Required linear graph for the Illustration 14. C.270 Applied Design of Experiments and Taguchi Methods TABLE 14. Design an experiment. Hence L16 OA is required.4.10 3 1 2 1 2 3 4 3 4 1 2 1 2 3 4 3 4 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 ILLUSTRATION 14.1 D 4 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 E 5 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 G 6 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 AG 7 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2 AB 9 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 AB 11 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 C 12 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 AC 13 1 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2 F 14 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 H 15 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 B AB 2. P Q A 11 1 2 15 14 7 10 3 9 B 4 5 C 6 8 D E FIGURE 14. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 A 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 Layout of the experimental design for Illustration 14. P Q A 11 1 2 15 14 7 10 3 9 B 4 5 C 6 8 D E FIGURE 14.17 Trial no. 10 1 2 1 2 3 4 3 4 1 2 1 2 3 4 3 4 . 7 1 1 2 2 2 2 1 1 3 3 4 4 4 4 3 3 Layout of the experimental design for the Illustration 14.5 Standard linear graph and its modification for Illustration 14. 
TABLE 14. 8.5.2 D 3 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 B 4 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 C 5 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 E 9 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 A 11 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 AC 14 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 AB 15 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 Q 2.17. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 P 1. The experimental design with the assignment of factors and interactions is shown in Table 14. 6.Multi-level Factor Designs 271 The standard linear graph and its modification to suit the required linear graph is shown in Figure 14.2. 22 and 28. TABLE 14. 60. 38.5 C 3 1 2 1¢ 2 1¢ 1 1¢ 1 2 D 4 1 2 3 3 1 2 2 3 1 Response (Y) R1 R2 12 3 1 6 11 6 8 10 0 10 5 2 8 13 5 9 8 1 Analyse the data using ANOVA and draw conclusions. 52. 14. 56.18 Trial no. 42. 60. and H). 14. Suppose the response for the 16 successive trials is 46. 40. 40. G.1 An engineer wants to study one three-level factor and four two-level factors. 1 2 3 4 5 6 7 8 9 A 1 1 1 1 2 2 2 3 3 3 B 2 1 2 3 1 2 3 1 2 3 Data for Problem 14. 50. E. 56. 30.2 Show how to modify L16 OA to accommodate one four-level factor (A) and seven twolevel factors (B. F. 14. D.5 The following data has been collected (Table 14. Analyse the data using ANOVA and draw conclusions 14. 60. 58. .272 Applied Design of Experiments and Taguchi Methods PROBLEMS 14. Design an OA experiment. Design an OA experiment.3 Design an OA experiment to study two four-level factors and five two-level factors.18) using the following dummy level design.4 An experimenter wants to study three-factors each at three-levels and all two-factor interactions. C. That is. TABLE 15. Most of the published literature on Taguchi method application deals with a single response. However. we can collect the observed data for each response using Taguchi designs and the data can be analysed by different methods developed by various researchers. However. we may get different set of optimal levels for each response.1 INTRODUCTION In Taguchi methods we mainly deal with only single response optimization problems. Some of the examples where we have more than one dependent variable or multiple responses are given in Table 15. And it will be 273 . in practice we may have more than one dependant variable.C H A P T E R 15 Multi-response Optimization Problems 15.1 Experiment Heat treatment of machined parts Injection moulding of polypropylene components Optimization of face milling process Examples of multi-response experiments Dependant variables/Responses Surface hardness Depth of hardness Tensile strength Surface roughness Volume of material removal Surface roughness and Height of burr formed Left profile error Right profile error Left helix error and Right helix error Gear hobbing operation Taguchi method cannot be used directly to optimize the multi-response problems.1. In this chapter we will discuss some of the methods which can be easily employed. In multi-response problems if we try to determine the optimal levels for the factors based on one response at a time. only one dependent variable (response) is considered and the optimal levels for the parameters are determined based on the mean response/maximum of mean S/N ratio. each response can be the original observed data or its transformation such as S/N ratio. Also engineering judgment together with past experience will bring in some uncertainty in the decision making process. Reddy et al. 
(2005) have presented a literature review on solving multi-response problems in the Taguchi method.3 ASSIGNMENT OF WEIGHTS In assignment of weights method. Phadke (2008) has used Taguchi method to study the surface defects and thickness of a wafer in the polysilicon deposition process for a VLSI circuit manufacturing.2 ENGINEERING JUDGMENT Engineering judgment has been used until recently to optimize the multi-response problem. 5.274 Applied Design of Experiments and Taguchi Methods difficult to select the best set. the following methods are discussed: 1.1) This (W ) is termed Multi Response Performance Index (MRPI). Using this MRPI. the major issue is the method of determining the weights. validity of experimental results cannot be easily assured. the general approach in these problems is to combine the multi responses into a single statistic (response) and then obtain the optimal levels. some trade-offs were made to choose the optimal-factor levels for this two-response problem. Engineering judgment Assignment of weights Data envelopment analysis based ranking (DEAR) approach Grey relational analysis Factor analysis (Principal component method) Genetic algorithm 15. Most of the methods available in literature to solve multi-response problems address this issue. (1998) in their study in an Indian plastic industry have determined the optimal levels for the control factors considering each response separately. the multi-response problem is converted into a single response problem. the problem is solved as a single response problem. 2. Jeyapaul et al. If there is a conflict between the optimum levels arrived at by different response variable. where W = W1 R 1 + W2 R 2 (15. Let W1 be the weight assigned to. They have selected a factor that has the smallest or no effect on the S/N ratio for all the response variables but has a significant effect on the mean levels and termed it mean adjustment/signal factor. Each experimenter can judge differently. They have set the level of the adjustment factor so that the mean response is on target. Based on the judgment of relevant experience and engineering knowledge. The sum of the weighted response (W) will be the single response. 3. . Usually. the authors have suggested the use of engineering judgment to resolve the conflict. In this approach. In the multi-response problem. Suppose we have two responses in a problem. say the first response R1 and W2 be the weight assigned to the second response R2. 4. In this chapter. 6. 15. Literature review indicates that several approaches have been used to obtain MRPI. By human judgment. 2(a)].2710 1. It was identified that tensile strength and surface roughness have been the causes for the defects. 1/SR is obtained and then W SR is computed. Also note that TS is larger—the better type of quality characteristic and SR is smaller—the better type of quality characteristic. The following factors and levels were selected for study [Table 15. TABLE 15. TABLE 15.1 Levels 2 265 6 55 3 280 9 80 The experiment results are given in Table 15.4511 0.3892 0.9785. Note that there are two responses.1190 and WSR1 = = 0.1225 9031 20.2(b).3736 1.2(a) Factors 1 Inlet temperature (A) Injection time (B) Injection pressure (C) 250 3 30 Factors and levels for Illustration 15. in the first trial WTS1 = 1075 2.5694 = 0. for each response data.6127 0.Multi-response Optimization Problems 275 ILLUSTRATION 15. the components were rejected due to defects such as tearing along the hinge and poor surface finish. 
the individual response (data) is divided by the total response value (S TS).9785 . That is. One is the Tensile Strength (TS) and the second is the Surface Roughness (SR). reverse normalization procedure is used.9640 0. From Table 15.2(b).3397 0.1 Factors B 1 2 3 1 2 3 1 2 3 A 1 1 1 2 2 2 3 3 3 C 1 2 3 2 3 1 3 1 2 TS 1075 1044 1062 1036 988 985 926 968 957 SR 0.2910 0.2(b) Trial no. For TS (larger—the better characteristic). Note that WTS1 = 6 TS TS1 and WSR1 = (1/SR1 ) 6 1/SR For example. S TS = 9041 and S 1/SR = 20.1 In an injection moulding process. 1 2 3 4 5 6 7 8 9 Experimental results for Illustration 15. In the case of SR (smaller—the better characteristic).1577 The weights are determined as follows. 1189 ´ 1075 + 0.1024 0.4. TABLE 15. optimal levels are identified based on maximum MRPI values.7204 101.3.8701 103. TABLE 15.8652 120. .1189 0.4511 0.0494 0.276 Applied Design of Experiments and Taguchi Methods (MRPI)i = W 1Y11 + W2Y12 + .2710 1.8652 120.3892 0.1071 0.0778 0.4127 94.8652 The weights and MRPI values for all the trials are given in Table 15.1) Factors B 1 2 3 1 2 3 1 2 3 A 1 1 1 2 2 2 3 3 3 C 1 2 3 2 3 1 3 1 2 MRPI 127.3397 0. The original problem with the MRPI score is given in Table 15.2168 2.1057 0.9438 1.8367 118.1175 0.1225 ´ 0.1090 0.7867 0. + W jYij (MRPI)i = MRPI of the ith trial/experiment Wj = Weight of the j th response/dependant variable Yij = Observed data of ith trial/experiment under jth response MRPI1 = (0..4 Trial no.2983 (15.1403 0.1155 0. Since MRPI is a weighted score.5.8701 103.1225 0.9640 0.7204 101.5694 2.3736 1.3023 MRPI 127.4) as a single response of the original problem and obtain solution using methods discussed in Chapter 12.2983 The level totals of MRPI are given in Table 15.3 Trial 1 2 3 4 5 6 7 8 9 TS 1075 1044 1062 1036 988 985 926 968 957 Weights and MRPI values for Illustration 15.2910 0.0356 107.3892) = 127.0356 107.1276 0.1093 0.2) Now.1 WTS 0.8367 118.3411 WSR 0.6767 0.1058 SR 0.6321 1.6297 124.1577 1/SR 2.7746 6.6696 108.6696 108.6297 124.0369 0. we consider MRPI (Table 15.0373 2.6127 0.0375 0..1145 0. 1 2 3 4 5 6 7 8 9 MRPI as response (Illustration 15.4127 94. Volume of material removal (Larger—the better type quality characteristic) 2.8 5. (15.1.2 Material Removal (MR) 1 2 160 151 152 111 95 112 97 77 97 110 145 120 149 140 91 77 79 81 Surface Roughness (SR) 1 2 7.Multi-response Optimization Problems 277 TABLE 15.53 2.8 4. .1179 332.7 8.60 2.3316 341.98 4. ILLUSTRATION 15.98 The data analysis using weightage method is presented below.8888 333. For MR. Burr height (Smaller—the better type quality characteristic) Note that there are two replications under each response.94 2.2 6. While computing the weights the absolute value of data is considered.5477 327.8 4.5 Factors Level totals of MRPI for Illustration 15.21 3.73 3. 1 2 3 4 5 6 7 8 9 A 1 1 1 2 2 2 3 3 3 Factors B C 1 2 3 1 2 3 1 2 3 1 2 3 2 3 1 3 1 2 Experimental results for Illustration 15.50 3. Surface roughness (Smaller—the better type quality characteristic) 3.5976 1 Inlet temperature (A) Injection time (B) Injection pressure (C) 373.8 7.93 2.8 6.84 2.6) on the following three responses/dependent variables from a manufacturing operation: 1. the data is transformed into the S/N ratios and is given in Table 15.79 3.7. TABLE 15. The weights are determined as explained in Illustration 15.8. The weights obtained and the MRPI values are given in Table 15.50 3. MRPI is computed using Eq.1 Levels 2 334.78 2.5 9.00 3.7424 The optimal levels are selected based on maximum MRPI are A1. 
B1 and C2.0 9.67 2.5 8. In the case of SR and BH.20 3. reverse normalization procedure is used.6 Trial no.7 Burr Height (BH) 1 2 4.6 5.2 3.6 5.2).6 4.2 4. the individual response is divided by the total response value (S MR).9983 3 299. Since two replications are available.4049 338.2 5.2 Suppose we have the following Taguchi design with data (Table 15.27 4.3857 340.15 3.3 7. 8224 –11.7 Trial no.0896 0.3552 –8.1321 –11.3998 42. This is left as an exercise to the reader. .2 W MV Weights WSR 0.0894 0.1.1169 0.2643 2.0998 0.1061 The optimal levels for the factors can now be determined considering MRPI as single response as in Illustration 15.8397 38. 1 2 3 4 5 6 7 8 9 S/N ratio values for Illustration 15.5042 –14.1159 0.1209 0.4900 41.9995 40.1980 –12.4235 –9.9651 BH –12.1292 0.0440 1.5933 –9.1905 1.2465 0.3775 –14.6858 1. 1 2 3 4 5 6 7 8 9 TABLE 15.2 MR 42. In this method.4506 –17.1297 0.2119 –19.4843 Weights and MRPI for Illustration 15.1202 MRPI ´ 103 2.1201 –17.9896 38.4883 1.1092 0.0283 1.1118 0.3522 –8.1328 0.1054 0. The following steps discuss DEAR: Step 1: Determine the weights associated with each response for all experiments using an appropriate weighting technique.1010 0.1203 0.8770 –12.7268 –10.8 Trial no.4645 –14.6178 37.1005 0.1162 W BH 0.9198 39. This ratio can be treated as equivalent to MRPI.0060 2. 15.1185 0.1574 43.0960 0.1033 0.2296 –14.9379 1. a set of original responses are mapped into a ratio (weighted sum of responses with larger—the better is divided by weighted sum of responses with smaller—the better or nominal—the best) so that the optimal levels can be found based on this ratio.278 Applied Design of Experiments and Taguchi Methods TABLE 15.1147 0.0940 0.1009 0.1391 0.8824 S/N ratio values SR –18.4 DATA ENVELOPMENT ANALYSIS BASED RANKING METHOD Hung-Chang and Yan-Kwang (2002) proposed a Data Envelopment Analysis based Ranking (DEAR) method for optimizing multi-response Taguchi experiments.1117 0.1151 0. 6807 2. 1 2 3 4 5 6 7 8 9 TS 1075 1044 1062 1036 988 985 926 968 957 Response weights for Illustration 15. Step 3: Divide the weighted data of larger—the better type with weighted data of smaller—the better type or nominal—the best type.3023 Weighted responses and MRPI for Illustration 15. .1577 WSR 0.1071 0.3 The problem considered here is same as in Illustration 15.3 WTS 0.1240 TS * WTS (P) 127.3736 1.2518 1.1057 0.2710 1.8(b) Trial no.0494 0.04762 0. we obtain Table 15.8(a) Trial no.04768 0.0369 0. The level totals of MRPI for Illustration 15.0778 0.4910 2.6127 0.8(b).9896 2.04766 0.1762 2.04766 0.2648 2.8(a).0375 0.1090 0.6177 2. The optimal levels based on maximum MRPI are A1.1058 SR 0.04768 0.1145 0. TABLE 15.3397 0.9.8175 120.1225 0. ILLUSTRATION 15.6220 107.04764 0. B1 and C2 .1.1276 0.5300 2.1155 0.3892 0.Multi-response Optimization Problems 279 Step 2: Transform the observed data of each response into weighted data by multiplying the observed data with its own weight.04767 MRPI = P/Q ´ 103 2. the application of the first step in this approach results in Table 15.1189 0.3650 94. Step 4: Treat the value obtained in Step 3 as MRPI and obtain the solution. For determining the weights. Now applying Step 2 and Step 3.1403 0.9640 0.4511 0.3 SR * W SR(Q) 0. 1 2 3 4 5 6 7 8 9 TABLE 15.6728 101.3 are given in Table 15.1024 0.5820 124.2910 0.1093 0.1.1175 0.04767 0.7850 118.2506 The optimal levels are identified treating MRPI as single response as in Illustration 15.9884 107. 
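For a single trial, the weighting arithmetic of both approaches can be traced directly. The sketch below reproduces the trial-1 weights and MRPI values of the injection-moulding example using only the totals quoted in the text (ΣTS = 9041 and Σ(1/SR) = 20.9785): the weighted-sum MRPI of Eq. (15.2) and the DEAR ratio of weighted larger-the-better to weighted smaller-the-better response.

```python
# Sketch: trial-1 weights and MRPI for the assignment-of-weights and DEAR methods,
# using the totals quoted in the text rather than re-summing the full data set.

TS1, SR1 = 1075, 0.3892
SUM_TS, SUM_INV_SR = 9041, 20.9785

w_ts = TS1 / SUM_TS                    # larger-the-better: weight from the response itself
w_sr = (1 / SR1) / SUM_INV_SR          # smaller-the-better: reverse normalization via 1/SR

# Assignment of weights, Eq. (15.2): weighted sum of the responses.
mrpi_weighted = w_ts * TS1 + w_sr * SR1

# DEAR: weighted larger-the-better response divided by weighted smaller-the-better response.
P, Q = w_ts * TS1, w_sr * SR1
mrpi_dear = P / Q

print(round(w_ts, 4), round(w_sr, 4))   # ~0.1189, ~0.1225
print(round(mrpi_weighted, 2))          # ~127.87, as in Table 15.3 (up to rounding)
print(round(mrpi_dear, 1))              # ~2681, i.e. ~2.68 x 10^3 as in Table 15.8(b)
```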
Note that the MRPI values are obtained by dividing the weighted response of larger—the better (TS) with the weighted response of smaller—the better quality characteristic.04767 0.8224 103. TABLE 15.2 are reproduced in Table 15.1328 0.1010 0.0881 9.0894 0.0760 31.0998 0.1009 0.6807 1.7435 0.1148 18. The weights obtained for Illustration 15.1151 0.9710 7. weighted response and MRPI for Illustration 15.7233 0.5649 9.7164 MRPI = P/(Q + R) 13.280 Applied Design of Experiments and Taguchi Methods TABLE 15.10.5480 29.1450 Factors 1 Inlet temperature (A ) Injection time (B) Injection pressure (C) 7.1092 0.1185 0.1005 0.2574 1.7427 0.1169 0. 1 2 3 4 5 6 7 8 9 Weights W SR 0. Optimization steps in grey relational analysis Step 1: Transform the original response data into S/N ratio (Yij) using the appropriate formulae depending on the type of quality characteristic.2.3014 BH * W BH ( R) 0.1297 0.1061 WBH 0.8284 7.2859 1.0940 0. the MRPI data can be analysed and optimal levels for the factors can be determined.9 Level totals of MRPI for Illustration 15.8858 SR * WSR (Q ) 1.1613 7.8220 26.4539 1.1162 MV * WMV (P ) 31.2631 1.0896 0.1147 0.2898 6.1118 0.1117 0.3918 16.10.10 are obtained by dividing the weighted response of the larger—the better type with the sum of weighted response of the smaller—the better type of quality characteristic.5456 1.0076 6.2495 22. In this approach a grey relational grade is obtained for analysing the relational degree of the multiple responses. The MRPI values given in Table 5.4 Application of DEAR approach for Illustration 15.1391 0.1203 0.1209 0.8721 ILLUSTRATION 15.7131 0.1054 0.4645 1. Now the weighted response for all the experiments is obtained as in Illustration 15.0960 0.9935 6.7106 0.5 GREY RELATIONAL ANALYSIS Grey relational analysis is used for solving interrelationships among the multiple responses.1033 0.2996 10.3 (x103) Levels 2 7.1292 0.0325 12.10 Weights.7155 0.7750 0.1676 18.3396 16.9102 13.1202 15.4 Trial no. Lin et al.1159 0.3 and is given in Table 15.1684 1. Now considering MRPI as single response.0770 35.7746 0.3596 WMV 0. (2002) have attempted grey relational based approach to solve multi-response problems in the Taguchi methods.1634 8.8395 14. .1087 3 6. . i = 1. i = 1. Zij = Normalized value for ith experiment/trial for j th dependant variable/response Zij = Yij  min (Yij . . Normalization is a transformation performed on a single input to distribute the data evenly and scale it into acceptable range for further analysis. 2. ILLUSTRATION 15.6) GCij = grey relational coefficient for the ith experiment/trial and jth dependant variable/ response D = absolute difference between Yoj and Yij which is a deviation from target value and can be treated as quality loss.. 2.5) Step 3: Compute the grey relational coefficient (GC) for the normalized S/N ratio values.. .. 2... n) (to be used for S/N ratio with smaller—the better case) (15. 2. n) max (| Yij  T |..1 as against two replications in this illustration.. the data is different in this example which is given in Table 15. i = 1. However. 2.. i = 1.. . n)  min (|Yij  T |... Gi = 1 m  GC ij (15. i = 1. . n)  Yij max (Yij . 2.. m —responses (15.... 
Yoj = optimum performance value or the ideal normalized value of jth response Yij = the ith normalized value of the j th response/dependant variable Dmin = minimum value of D Dmax = maximum value of D l is the distinguishing coefficient which is defined in the range 0 £ l £ 1 (the value may be adjusted on the practical needs of the system) Step 4: Compute the grey relational grade (Gi)... . 2. 2. 2.. n) max (Yij . .7) where m is the number of responses Step 5: Use response graph method or ANOVA and select optimal levels for the factors based on maximum average Gi value. i = 1.Multi-response Optimization Problems 281 Step 2: Normalize Yij as Zij (0 £ Zij £ 1) by the following formula to avoid the effect of using different units and to reduce variability.4) Zij = (| Yij  T |)  min (| Yij  T |. .. i = 1.. n) (to be used for S/N ratio with larger—the better case) (15..1. n) (to be used for S/N ratio with nominal—the best case) (15. 2.11... n)  min (Yij .... .. n)  min (Yij . .5 Application of Grey Relational Analysis The problem considered here is same as in Illustration 15. .. 2..3) Zij = max (Yij .... Note that we had data from one replication in Illustration 15.. i = 1. GCij = 'min + M'max 'ij + M'max i = 1. . i = 1. n —experiments   j = 1. 11 Trial no.2710 1. 1. S/N ratio for SR = –10 log Step 2: Normalization of S/N values Using Eqs.6127 0. 1. 1 2 3 4 5 6 7 8 9 Step 1: Factors B 1 2 3 1 2 3 1 2 3 Experimental data for Illustration 15.6362  59. Z11 = (60.3399 0.2712 1. Since tensile strength (TS) has to be maximized.9620 0. S/N ratio for TS = –10 log 1  1 1  +   = 60.4511 0. 15.1920799 2 Similarly.6127 0. 1.3) and (15.6362 (Trial no. min Yij = 59.5 Tensile strength (TS) 1 2 1075 1044 1062 1036 988 985 926 968 957 1077 1042 1062 1032 990 983 926 966 959 Surface roughness (SR) 1 2 0.12.9) for Trial no.1577 0. Applying Eq. for Trial no.12.63624543 2 2  (1075) (1077)2  1 [(0.3397 0.6362  59. Hence the S/N ratio for TS is computed from the following formula: 1 S/N ratio(h) = –10 log    n  i 1 n 1 Yij2 (15.8) The second performance measure (response) is smaller—the better type and its S/N ratio is computed from 1 S/N ratio(h) = –10 log  n   Yij2   i 1 n   (15.3892)2 + (0.4) we normalize the S/N values in Table 15. it is larger—the better type of quality characteristic.3322) = 1 (60.3736 1. (15.3322) .3896)2] = 8.3892 0.3322 (Trial no. 1).1557 A 1 1 1 2 2 2 3 3 3 C 1 2 3 2 3 1 3 1 2 Transformation of data into S/N ratios There are two replications for each response. Now the data has to be transformed into S/N ratio. for all trials the S/N ratios are computed and given in Table 15.2908 0.3896 0. Trial no.4515 0. 7) and max Yij = 60.3736 1.2910 0.282 Applied Design of Experiments and Taguchi Methods TABLE 15.9640 0.3. For TS. 2550 0.3275 6. DTS1 = 1 – 1 = 0 and for Trial no.3736 1.3657 60.6362  59.3275 6.9107 8.13 Trial no.7925 0.2712 1. Yoj = 1 for both TS and SR.9927 1 0 TS 60.5519 –2. 1 2 3 4 5 6 7 8 9 Tensile strength (TS) 1 2 1075 1044 1062 1036 988 985 926 968 957 1077 1042 1062 1032 990 983 926 966 959 S/N ratios for Illustration 15. min Yij = –2.5519 –2.3657 60.5 S/N ratios SR 8.9128 0.6362 60. .3322) = 0.6273 S/N ratio (SR) 8. 2.2710 1.8599 59.4511 0.3369 0.6362 60. 1 and TS.4384 0.3322 59. Applying Eq.3657  59.1921 9.1542 (Trial no. DTS2 = 1 – 0.6477 0.3892 0. the normalized score for other trials of SR are computed and tabulated in Table 15.9620 0.9039 59.3896 0.2910 0.4515 0.7925 = 0. 8).13.1542  (  2.7925 (60.13.7085 59.6127 0. 
Z82 = Z12 = 16.1557 S/ N ratio (TS) 60.3736 1.12.5 Surface roughness (SR) 1 2 0.9107 8.2178 (Trial no.4334 0.2075.1542 and Z21 = (60.1921 9.2550 0.1921) = 0.4047 0 0.3755 4.0836 –2.3399 0.1542 Normalized S/ N ratios TS SR 1 0.2178) and 16.4138 0.3397 0.8615 0.1542  (  2.2178 16.1577 0.0836 –2. For Trial no.5225 60.4334 16.7348 0.6127 0. For SR.9039 59.4).3755 4. 1 2 3 4 5 6 7 8 9 Step 3: Normalized S/N ratios for Illustration 15. TABLE 15.2178) = 1 16.2886 0. max Yij = 16.2904 59.Multi-response Optimization Problems 283 TABLE 15.2908 0.2178) Similarly.2263 0.5031 0.1542  (8.3322 59.2178 16. the normalized values for TS are obtained and tabulated in Table 15.8599 59.5225 60.9640 0.7085 59.1542  (  2. 9).3322) Similarly.12 Trial no. (15.2904 59.6273 Computation of grey relational coefficient (GCij) Quality loss (D) = absolute difference between Yoj and Yij From Table 15. 1 and TS.8192 Similarly for all the trials.6956 3 0.0872 0.3523 0.0073 0 1 GCTS 1 0.8192 0.7223 0.5953 1 0. the GCTS1 = For Trial no. TABLE 15.5 Levels 2 0.7395 0.5319 Normalized S/N ratios TS TR 1 0.6631 0.5638 GCSR 0.7922 0.6477 0.5. for both responses (TS and SR). GCTS2 = 0+1  1 = 0.7466 Mean of MRPI values for Illustration 15.8783 0.1385 0. .7348 0.7737 DSR 0. Dmax = 1 and Dmin = 0. TABLE 15.2075 0.9927 1 0.9198 0.14. 0+1  1 For Illustration 15.14 Trial no.5 DTS 0 0.4047 0 0.2886 0.4334 0.5 0. the grey grade (Gi) is equivalent to MRPI and is treated as single response problem and MRPI data is analysed to determine the optimal levels for the factors. Gi = 1 m  GCij = 1 2 (1 + 0. 1.2263 0.15.8282 0.7057 0. The main effects on MRPI (mean of MRPI) are tabulated in Table 15. These are given in Table 15.8615 0.6634 0.4384 0.8344 0.284 Applied Design of Experiments and Taguchi Methods Similarly.14.7114 0.7).6383 0.6304 0.7434 From Table 15.6).7464 0. the coefficients are 0.6383) = 0.8000 0.6131 0.4969 0.7925 0. 1 2 3 4 5 6 7 8 9 Normalized S/N ratios for Illustration 15.6268 0.9128 0.6681 0. Similarly. B1 and C1. 2 of TS.3369 0.14.5843 0. Assume l = 1.6542 0.15 Factors 1 Inlet temperature (A ) Injection time (B) Injection pressure (C) 0.5 Gi 0.8296 0. the grey grade values are computed and are given in Table 15.2652 0.14.9927 1 0 Now.6286 0. For Illustration 15.7206 0.2075 + 1  1 computed for all the trials of both TS and SR and are tabulated in Table 15.7898 0.5.4138 0. (15.7904 0.5031 0.6902 0. The grey grade values are computed from Eq. for Trial no.6404 0.5616 0. from Table 15.5666 0. it is found that the optimal levels are A1. The grey relational coefficient (GCij) is computed from the Eq.15.5862 0. we can compute the D values for all the trials and responses. For Trial no.8282. GCij = ' min + M' max ' ij + M' max 0+1  1 = 1. (15. where m is the number of sets of observations and n is the number of characteristics (input variables). Each factor is accounted for one or more input variables. 3. until the total sample variance is combined into component groups. This technique is applicable when there is a systematic interdependence among a set of observed variables.2: Perform reflections: (a) Initially assign a weight (wj) of +1 for all the columns. Principal Component Method (PCM) is a mathematical procedure that summarizes the information contained in a set of original variables into a new and smaller set of uncorrelated combinations with a minimum loss of information. etc.3.3. 
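Before the factor-analysis procedure is developed further, the grey relational computation of Illustration 15.5 can be condensed into a short sketch: normalize the S/N ratios (Eq. 15.3), form the deviation from the ideal normalized value 1, apply Eq. (15.6) with λ = 1, and average the coefficients into the grade. The spot checks below use values quoted in Tables 15.12–15.14; over a full normalized column Δmin = 0 and Δmax = 1.

```python
# Sketch of the grey relational steps: normalization, grey relational coefficient,
# and grey relational grade, with spot checks against the worked illustration.

def normalize_larger_better(values):
    """Eq. (15.3): map S/N ratios onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def grey_coefficient(z, dmin=0.0, dmax=1.0, lam=1.0):
    """Eq. (15.6) with deviation D = |1 - z| from the ideal normalized value 1."""
    d = abs(1 - z)
    return (dmin + lam * dmax) / (d + lam * dmax)

z = normalize_larger_better([60.6362, 60.3657, 59.3322])   # three of the TS S/N ratios in Table 15.12
print([round(v, 4) for v in z])          # [1.0, 0.7926, 0.0] -- the 0.7925 of Table 15.13 up to rounding

print(round(grey_coefficient(0.7925), 4))   # 0.8282 (TS, trial 2)
print(round(grey_coefficient(0.4334), 4))   # 0.6383 (SR, trial 1)
print(round((grey_coefficient(1.0) + grey_coefficient(0.4334)) / 2, 4))   # 0.8192, the grade of trial 1
```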
The second principle component accounts for the next largest amount of variance.3.2: Find n ´ n correlation coefficient matrix R1 summarizing the correlation coefficient for each pair of variables. Step 3. then assign a weight (wj) of +1 to each column j (variable j) in the matrix. They have proposed an effective procedure on the basis of PCM to optimize the multi-response problems in the Taguchi method.4. Step 3. . PCM of factor analysis is used for solving the multi-response Taguchi problems.Multi-response Optimization Problems 285 15.1: Input the data matrix of m ´ n size.6 FACTOR ANALYSIS Factor analysis is the most often used multi-variate technique in research studies. Factor analysis aims at grouping the original input variables into factors which underlie the input variables. are used to perform factor analysis.5. assign –1 as the weight for that column j . the reader may refer to Research Methodology by Panneerselvam (2004). then. Input the total number of principal components (K) to be identified. (b) If there is a negative entry in column j . The procedure for application of factor analysis is discussed as follows: Step 1: Transform the original data from the Taguchi experiment into S/N ratios for response using the appropriate formula depending on the type of quality characteristic (Section 13. otherwise go to Step 3. Step 2: Normalize the S/N ratios as in the case of Illustration 15.3). Each diagonal value of the correlation coefficient matrix is assumed as 1. treat the matrix Rk as the matrix Rk without any modification in Rk and go to Step 3. For details on factor analysis. since the respective variable is correlated to itself. Step 3: The normalized S/N ratio values corresponding to each response are considered to be the initial input for Factor Analysis.3: Perform reflections if necessary: 3.1: If the matrix is positive manifold. Statistical software is like SPPSS. and so on. This analysis combines the variables that account for the largest amount of variance to form the first principal component. MINITAB. it should be reflected. Step 3.2. Usually. Chao-Ton and Lee-Ing Tong (1997) have developed an effective procedure to transform a set of responses into a set of uncorrelated components such that the optimal conditions in the parameter design stage for the multi-response problem can be determined. …. j) Rk(j.4.14: Update Lj = Qj. Qj = Mj .13: For each j. where j = 1. 3.4: Divide each column sum. 3.8: Set i = i+1. otherwise go to Step 3. If all Lj and Qj are more or less closer.4.5. do the following updation: Rk(i. Step 3.4: Determination of kth principal component: 3.7: Multiply the value of Lj with the corresponding value in the ith row of Rk matrix for each j and sum such products and treat it as Mi.4.4. n.10. for all j = 1.2: Find the grand total (T1).4.10: Find the grand total (T2). where Mi =  R (i .5: For each column j . go to Step 3. otherwise go to Step 3. j ) L k j 1 n j 3.4. i) = Absolute value of Rk(j. …. compare Lj and Qj towards convergence. n as T2 =  j 1 n M2 j 3.6: Set row number i of the matrix Rk to 1(i = 1) 3. 3. j) = Absolute value of Rk(i. Sj by W to get unweighted normalized loading ULj for that column j.4.4.4.4.4.286 Applied Design of Experiments and Taguchi Methods In this case. 3.4.3: Find the normalizing factor.4.4. then go to Step 3. 3. 2. otherwise go to Step 3.4. 3.4. 3. where N 3. i) (c) Treat this revised matrix as the matrix R¢k which is known as the matrix with reflections. 
W which is given by the formula: W = T1 3.14.7.4.11: Find the normalizing factor N which is given by the formula: N = T2 3. 2.1: For each column j in the matrix R¢ k.6. which is the sum of squares of Mj. for each negative value in the row i of the column j . which is the sum of squares of the column sums: T1 =  Sj 2 j =1 n 3.12: Divide each Mj by N to get normalized loadings Qj. find the sum of the entries in it (Sj).9: If i £ n. 3.4. find its weighted factor loading Lj by multiplying ULj with its weight wj. is defined as: Ei = max(MRPIij) – min(MRPIij) If the factor i is controllable.11: Find the sum of squares of loadings of each column j (principal component j) which is known as eigenvalue of that column j .12: Drop insignificant principal components which have eigenvalues less than one.8. Step 3.10) Step 5: Determine the optimal factor and its level combination. (15. on the basis of performance index.5.9: Go to Step 3.2: Find the residual matrix Rk by element-by-element subtraction of the cross product matrix Ck from the matrix Rk–1.9) and are tabulated in Table 15.Multi-response Optimization Problems 287 Step 3.12) (15. therefore. Step 3. to estimate the effect of factor i.8) and (15.11) .10. the best level j*. The higher performance index implies the better product quality.7: If k > K then go to Step 3. calculate the average of multi-response performance index values (MRPI) for each level j .10: Arrange the loadings of K principal components by keeping the principal components on columns and the variables in rows.1: Determination of cross product matrix (Ck): Obtain the product of each pair of factor loadings of (k – 1)th principal component and store it in the respective row and column of the cross product matrix (Ck). etc. then the effect. MINITAB. Step 3.4. as shown below.8. the factor effect can be estimated and the optimal level for each controllable factor can also be determined. Step 4: Calculate Multi Response Performance Index (MRPI) value using principal components obtained by FA. is determined by j* = maxj(MRPIij) Step 6: Perform ANOVA for identifying the significant factors. The S/N ratio values are calculated for appropriate quality characteristic using Eqs. 3. (15. For Example. Step 3. Latest normalizing factor for Q in Step 3. denoted as MRPIij. Pik = Li N Step 3.11 of the last iteration of convergence = N.6: Increment k by 1 (k = k + 1) Step 3. The option of performing factor analysis is available in statistical software (SPSS.13: The principal component values with eigenvalue greater than one corresponding to each response is considered for further analysis.). MRPIi = P1 Y11 + P2Y12 + … + PjYij (15.8: Determination of Rk: 3.3. Step 3. Ei.6 The problem considered here is same as in Illustration 15. ILLUSTRATION 15.16. Step 3. Then.5: Store the loadings of the kth principal components Pik. 17 Exp.3399 0.734795 0.375533 4. 1 0.17.70852948 59.861457 0.962 0.766 1.8599 59. no.912766 0.2712 1.6 Tensile strength (TS) 1 2 1075 1044 1062 1036 988 985 926 968 957 1077 1042 1062 1032 990 983 926 966 959 Surface roughness (SR) 1 2 0.327474 6.7085 59.404656 0 0.226292 0.3892 0.32747 6.29041078 59.50313 0.15423 From the data in Table 15.551863 –2.36568617 60.9039 59. Here the components that are having eigenvalue more than one are selected for analysis.3736 1.174 Responses TS SR Eigen values .4511 0.288575 0.3896 0.16 Exp. The component values obtained after performing factor analysis based on PCM are given in Table 15.3736 1. 
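The reflected-centroid routine listed in the steps above is normally left to statistical software such as SPSS or MINITAB. For a quick numerical check, the sketch below instead uses the standard eigendecomposition of the correlation matrix of the normalized S/N ratios, which is how principal components are usually extracted in such packages; it is a simplified stand-in for the step-by-step procedure, not a line-by-line implementation of it, and the unit-length eigenvectors are used directly as the component weights.

```python
import numpy as np

def principal_components(z):
    """Principal components of normalized S/N ratios via the correlation matrix.

    z : (trials x responses) array of normalized S/N ratios.
    Returns eigenvalues and eigenvectors ordered from largest to smallest eigenvalue.
    """
    r = np.corrcoef(z, rowvar=False)        # responses x responses correlation matrix
    eigvals, eigvecs = np.linalg.eigh(r)    # symmetric matrix; eigh returns ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

# Hypothetical normalized S/N ratios (nine trials, two responses)
z = np.random.default_rng(1).random((9, 2))
eigvals, vectors = principal_components(z)
keep = eigvals > 1.0                        # retain only components with eigenvalue > 1
print(eigvals)
print(vectors[:, keep])                     # weights used to form the performance index
```

Components with eigenvalue greater than one are retained, exactly as in Step 3.12 of the procedure, and their weights are carried forward into the multi-response performance index.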
(15.6 Normalized S/N ratio values TS SR 1 0.271 1.08359 –2.6 Comp.2904 59.192081 9.910693 8.19208 9.647678 0. TABLE 15.3322 59.63624543 60.0836 –2.18 Component matrix for Illustration 15.3397 0.6273 8.288 Applied Design of Experiments and Taguchi Methods TABLE 15.438416 0.2179 16.6127 0.766 –0.52249033 60.1542 Control factors A B C Error 1 1 1 2 2 2 3 3 3 1 2 3 1 2 3 1 2 3 1 2 3 2 3 1 3 1 2 1 2 3 3 1 2 2 3 1 The normalized S/N ratio values are computed using Eqs.1557 0.368967 0.6362 60.79252 0.4138 0. 1 2 3 4 5 6 7 8 9 Normalized S/N ratio for Illustration 15.37553 4. factor analysis is performed based on principal component method.4515 0.3657 60.433383 0.85990197 59.18.2908 0.33221973 59. 1 2 3 4 5 6 7 8 9 Experimental results and S/N ratio for Illustration 15.964 0.291 0.3) and (15.6127 0.21785 16.91069 8.255042 0.62731018 8.25504 0.5225 60.55186 –2.17. no.4) and the same is given in Table 15.1557 S/N ratio values TS SR 60.90392583 59. TABLE 15.992692 1 0 S/N ratio values TS SR 60. 2 –0.335891 1 0.50313 0. TABLE 15.766 Zi1 – 0. The MRPI values are computed and are tabulated in Table 15.007 –0.09702 –0.1 shows plot of factor effects.3 0.04957 –0.20 Factors Inlet temperature (A) Injection time (B) Injection pressure (C) 0.5 Factor levels A1 A2 A3 B1 B2 B3 C1 C2 C3 Main effects on MRPI 2 –0.861457 0.123131 –0.17334 Table 15. the optimal condition is set as A1. Consequently.697847 0.434029 0.1 0 –0.09003 0.14113 –0.766 Zi2 where Zi1.368967 0.1 –0.433383 0.647678 0.288575 0.03931 FIGURE 15.404656 0 0.3 –0.203058 –0.264263 0.992692 1 0 MRPI 1 2 3 4 5 6 7 8 9 0.133586 3 –0.54495 0. TABLE 15. The larger MRPI value implies the better quality.324441 0.320509 –0.4 –0.7604 –0. B3 and C2.19. no. Normalized S/N ratios and MRPI values Normalized S/ N ratio values TS SR 1 0.37734 0. MRPIi1 = 0.912766 0. B. and Zi2 represent the normalized S/N ratio values for the responses TS and SR at ith trial respectively. .734795 0. The controllable factors on MRPI value in the order of significance are A.226292 0.1 Factor effects on MRPI.20 summarizes the main effects on MRPI and Figure 15. the following MRPI equation is obtained.0512 –0.2 MRPI values 0.2023 Max–Min 0. and C.4 0.19 Exp.4138 0.79252 0.18.Multi-response Optimization Problems 289 Using the values from Table 15.438416 0. The result of the pooled ANOVA shows that inlet temperature is the significant factor with a contribution of 58.084641 0. bin packing problem.1 Initialization: Randomly generate an initial population of Ps strings.3 to generate Ps strings with the pre-specified crossover probability Pc. TABLE 15.4 Crossover: Apply the pre-specified crossover operator to each of the selected pairs in Step 3.21 Factors A C Error Total Sum of squares 0. scheduling. (2005) have applied GA to solve multi-response Taguchi problems.3 Selection: Select a pre-specified number of pairs of strings from the current population according to the selection methodology specified. Step 1: Transform the original data from the Taguchi experiment into S/N ratios for each type of response using the appropriate formula depending on the type of quality characteristic (Section 13.365762 0. The GA approach adopted in the robust design procedure is explained as follows: Step 3.37506 Degrees of freedom 2 2 4 8 15.997108 % contribution 58. where Ps is the population size.97%.977 13.169283 0.7 GENETIC ALGORITHM Genetic Algorithms (GAs) are search heuristics used to solve global optimization problems in complex search spaces. etc. 
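Whether the per-trial index comes from grey relational analysis, factor analysis or weighted S/N ratios, the factor effects are always obtained the same way: average the index over the trials run at each level of the corresponding orthogonal-array column, pick the level with the best average, and rank the factors by the max minus min spread. A small sketch of that step follows; the MRPI values and the L9-style level column are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def level_averages(index, column):
    """Average a per-trial performance index over each level of one OA column."""
    levels = sorted(set(column))
    return {lvl: float(np.mean([index[i] for i, c in enumerate(column) if c == lvl]))
            for lvl in levels}

# Hypothetical MRPI values for an L9 experiment and the level column of factor A
mrpi  = np.array([0.78, 0.70, 0.64, 0.72, 0.66, 0.69, 0.55, 0.60, 0.58])
col_A = [1, 1, 1, 2, 2, 2, 3, 3, 3]

means  = level_averages(mrpi, col_A)
best   = max(means, key=means.get)                      # larger index = better quality
effect = max(means.values()) - min(means.values())      # max - min, used for ranking
print(means, best, round(effect, 3))
```

Repeating the same calculation for every factor column produces the main-effects table and the order of significance used to set the optimal condition.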
Jeyapaul et al.240355 Results of the pooled ANOVA Mean squares 0.3). The detailed discussion and applications can be found in Goldberg (1989 and 1998).731524 0.308813 0. Then transform the value of the objective function for each solution to the value of the fitness function for each string. (Yenlay 2001). The steps of GA to solve multi-response problems in Taguchi method is given below.290 Applied Design of Experiments and Taguchi Methods The ANOVA on MRPI values is carried out and results are given in Table 15.2 Evaluation: Calculate the value of the objective function for each solution. The objective of this GA is to find the optimal weights so as to maximize the normalized S/N ratio. Step 3: The normalized S/N ratio values are considered for calculating the fitness value.64794 27. GA has been applied successfully for solving problems combinatorial Optimization problems such as Travelling salesman problem. Step 3.084887 F 4.5.21. Step 2: Normalize the S/N ratios as in the case of Illustration 15. Here the weights are considered as genes and the sum of the weights should be equal to one. Step 3.339548 1. GAs are intelligent random search strategies which have been used successfully to find near optimal solutions to many complex problems. Step 3.. vehicle routing problem. . to estimate the effect of factor i .2.Multi-response Optimization Problems 291 Step 3. For Example. the factor effect is estimated and the optimal level for each controllable factor is determined. w2. …. therefore. FIGURE 15. return to Step 3. Step 3. The GA approach is depicted in Figure 15.wj) obtained from GA WSNi = W1Z11 + W2Z12 + … + WjZij (15.2 Schematic representation of GA. denoted as WSNij.6 Termination test : If a pre-specified stopping condition is satisfied. calculate the average of weighted S/N ratio values (WSN) for each level j. stop this algorithm.13) Step 5: Determine the optimal level combination for the factors. on the basis of weighted S/N ratio.2.14) .5 Mutation: Apply the pre-specified mutation operator to each of the generated strings with the pre-specified mutation probability Pm. is defined as: Ei = max(WSNij) – min(WSNij) (15. then the effect Ei. Otherwise. Maximization of weighted S/N ratio results in better product quality. Step 4: Calculate weighted S/N ratio value using the weights (w1. ..05 0. which results in fixed values for all generations. The dotted line indicates the single crossover point. Here we use the single point crossover which is the most basic crossover operator where a single location is chosen and the offspring gets all the genes from the first parent up to that point and all the genes from the second parent after that point. Zij = normalized S/N ratio values.10] and the sum of the genes should be one. n = number of experiments under each response. the best level j*. The objective of this algorithm is to find the optimal weights to the responses to maximize the normalized S/N ratio. initial populations of 20 chromosomes are generated subject to a feasibility condition. and k = number of responses. Here the weights are considered as the genes. the sum of weights should equal one.34 0. the chromosome will be like [0.21 0. This selection probability can depend on the actual fitness values and hence they change between generations or they can depend on the rank of the fitness values only.e. 
Crossover: This is a genetic operator that combines two parent chromosomes to produce a new child chromosome.292 Applied Design of Experiments and Taguchi Methods If the factor i is controllable. Evaluation: The evaluation is finding the total weighted S/N ratio for the generated population. Selection: Selection in evolutionary algorithms is defined by selection probabilities or rank which is calculated using the fitness value for each individual within a population.15) The implementation of genetic algorithm is explained as follows: Initial population: In this algorithm. i.30 0. Wj = weights for each response. The objective function value for the problem is given f( x) =  j 1 i 1 k n W j Z ij (15.16) where f(x) = total weighted S/N (WSN) ratio to be maximized. The implementation of genetic algorithm (15. The number of genes in a chromosome is equal to the number of the responses. initialization that is initial population generation is often carried out randomly. Consider the following two parents which have been selected for crossover. If we take five responses. is determined by j* = maxj(WSNij) Step 6: Perform ANOVA for identifying the significant factors. 0673 0. The normalized S/N ratio values are computed using Eqs.6692 0.83992 0.1217 1. The mutation can be thought of as natural experiments.Multi-response Optimization Problems 293 For Example: Parent 1 0. For Example: Before mutation: Offspring 1 0. .16.0544 Mutation: This allows new genetic patterns to be introduced that were not already contained in the population. The problem considered here is same as in Illustration 15.6692 0.09142 Feasible condition 1 Termination condition When the iteration meets the 10.83992 0.4) and the same is given in Table 15.8125 0.8) and (15.8125 0. 2635 0.0673 1 0. 0658 Parent 2 0. (15.0658 Offspring 2 0. somewhat random. sequence of genes into a chromosome.9456 0. the program comes to stopping condition.3) and (15. (15.07485 0.1217 Feasible condition 1 Offspring 1 Feasible condition 0.08543 Feasible condition 1 After mutation: Offspring 1 0.9) and are tabulated in Table 5.5. The S/N ratio values are calculated for appropriate quality characteristic using Eqs.06886 0.17.000 number of iteration.2635 0. These experiments introduce a new. 662622 0.37035 0.354351 0.22 Exp.294 Applied Design of Experiments and Taguchi Methods By applying GA procedure to the normalized S/N ratios.302927 .50313 0.000739 1 2 3 4 5 6 7 8 9 Table 15.407558 0. represent the normalized S/N ratio values for the responses TS and SR at ith trial respectively.713638 Max–Min 0.734795 0. the following result is obtained as optimal weights corresponding to each response.288575 0. The WSN values are computed and are listed in the last column of Table 15. The max-min column in Table 15.3 shows the plot of the factor effects. So the optimal condition is set as A3.861457 0.647678 0.404656 0 0.41377 0.177914 0.433383 0. TABLE 15. The WSN equation is: WSNi1 = 0.592577 0. the WSN values are considered for optimizing the multi-response parameter design problem.648543 0.4138 0.761909 0. C and A.24) gives that there is no significant factor.997677 0. The optimal weights are [0.484709 0.22. B1 and C3.992692 1 0 WSN 0. The controllable factors on WSN value in the order of significance are B.23 gives the main effects on WSN and Figure 15.23 Factors Inlet temperature (A) Injection time (B) Injection pressure (C) 1 0.410711 3 0.79252 0.502918 0.003256 Zi1 + 0. 
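Before turning to the results in Table 15.22, the GA loop just described can be sketched compactly: the response weights are the genes, feasibility requires them to sum to one, and the fitness of a chromosome is the total weighted normalized S/N ratio of Eq. (15.16). The population size, crossover and mutation rates and generation count below are illustrative (the study itself uses 20 chromosomes and 10,000 iterations), and renormalizing after crossover and mutation is one simple way to restore the sum-to-one condition, not necessarily the authors' exact repair step.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, z):
    """Total weighted normalized S/N ratio: f(x) = sum_j sum_i w_j * z_ij."""
    return float(np.sum(z @ weights))

def evolve(z, pop_size=20, generations=500, pc=0.8, pm=0.1):
    n_resp = z.shape[1]
    pop = rng.random((pop_size, n_resp))
    pop /= pop.sum(axis=1, keepdims=True)              # feasibility: weights sum to one
    for _ in range(generations):
        fit = np.array([fitness(w, z) for w in pop])
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]   # keep the better half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            child = a.copy()
            if rng.random() < pc and n_resp > 1:       # single-point crossover
                cut = rng.integers(1, n_resp)
                child = np.concatenate([a[:cut], b[cut:]])
            if rng.random() < pm:                      # mutation: perturb one gene
                child[rng.integers(n_resp)] = rng.random()
            child /= child.sum()                       # repair back to sum-to-one
            children.append(child)
        pop = np.array(children)
    fit = np.array([fitness(w, z) for w in pop])
    return pop[np.argmax(fit)]

# Hypothetical normalized S/N ratios for two responses over nine trials
z = rng.random((9, 2))
print(evolve(z))    # weight vector that maximizes the total weighted S/N ratio
```

The weight vector returned this way is then used to compute the weighted S/N ratio (WSN) for every trial, which is analysed as a single response.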
no Normalized S/N ratios and WSN values Normalized S/ N ratio values TS SR 1 0.623649 0.989451 0. TABLE 15.368967 0.912766 0.61556 Main effects on WSN values 2 0. The results of the ANOVA (Table 15.996735].226292 0. The larger WSN value implies the better quality.438416 0.23 indicates the order of significance factors in affecting the process performance.435233 0. 0.003256.861043 0. Finally.996735 Zi2 where Zi1 and Zi2. Multi-response Optimization Problems 0.1 Six main factors A.5 0.9 WSN values 0.4 0.15 2.99 4.25). TABLE 15. B.25 Trial no.458728 0.859816 Results of the pooled ANOVA Mean squares 0.8 3.88 11.33 2. Response 1 is smaller—the better type of quality characteristic and Response 2 is larger— the better type of characteristic.41 21.07 Response 2 1 12.39 14.66 13.29 13.3 0. D.24 Factors B C Error Total Sum of squares 0. TABLE 15. E and F and one interaction AB has been studied and the following data have been obtained (Table 15.143347 0.15 4.6 0.123722 0.7 0.00 19.64 4.1 0 A1 A2 A3 B1 B2 B3 C1 C2 C3 295 Factor levels FIGURE 15.70 20.3 Factor effects on WSN values.257741 0.72 3. Note that there are two responses.35185 Degrees of freedom 2 2 4 8 PROBLEMS 15.22 5.27 2 13.95 13.53 5.13 3.624977 % Contribution 29.87 6. C.58 13.114682 F 1.55 10.74 4.02 2.27 2 3.35 4.071674 0.97632 16. 1 2 3 4 5 6 7 8 A 1 1 1 1 1 2 2 2 2 B 2 1 1 2 2 1 1 2 2 Factors/ Columns AB 3 1 1 2 2 2 2 1 1 C 4 1 2 1 2 1 2 1 2 D 5 1 2 1 2 2 1 2 1 E 6 1 2 2 1 1 2 2 1 F 7 1 2 2 1 2 1 1 2 Data for Problem 15.2 0.57 .67183 53.60 19.70 10.8 0.1 Response 1 1 4.128871 0.11 22.20 18. 5 12.2 76.2 78.1 and determine optimal levels for the factors.9 15.6 13.0 73.4 11.0 76. TABLE 15.5 11.8 75.8 12. 15. Apply grey relational analysis and determine optimal levels for the factors.6 11.8 76. And determine the optimal levels for the factors.7 12.9 12. 15.2 75.0 78.8 74.7 12.0 72.2 10.0 12. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 A 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 B 2 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 C 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 Factors/ Columns D 4 1 2 3 1 2 3 2 3 1 3 1 2 2 3 1 3 1 2 E 5 1 2 3 2 3 1 1 2 3 3 1 2 3 1 2 2 3 1 F 6 1 2 3 2 3 1 3 1 2 2 3 1 1 2 3 3 1 2 G 7 1 2 3 3 1 2 2 3 1 2 3 1 3 1 2 1 2 3 H 8 1 2 3 3 1 2 3 1 2 1 2 3 2 3 1 2 3 1 Data for Problem 15.9 78.0 11.0 72.5 11.0 79.6 11. .5 10.5 Consider the data given in L18 OA (Table 15.7 78.8 13.8 11.0 11.1 11.2 13. 15.6 75. Apply Data Envelopment Analysis-based Ranking Approach to determine the optimal levels for the factors.1 78.8 76.6 78.8 10.7 12.0 Roughness 1 2 11.7 10.9 11. The objective is to maximize the strength and minimize the roughness.2 75.3 10.5.9 75.6 75.1 10. 15.0 77.1.8 76.5 13.4 Consider data in Problem 15.1 and apply Data Envelopment Analysis-based Ranking Method to determine optimal levels to the factors.0 72.0 71.4 13.0 73.9 69.6 11.7 10.7 11.7 12.6 75.6 Consider data in Problem 15.1 70.4 12.3 Consider Problem 15.26 Trial no.2 Apply grey relational analysis to Problem 15.4 76.5 Responses Strength 1 2 76.6 75.1 79.5 11. Assume that Response 1 is nominal—the best type and Response 2 is larger—the better type and apply grey relational analysis to obtain optimal levels for the factors.0 75.0 75.5 11.1 77.26).296 Applied Design of Experiments and Taguchi Methods Analyse the data by an appropriate weighting technique. BF and EG are suspected to have an effect on the yield. 
Identification of factors The factors that could affect the process yield have been identified through brainstorming are: (i) Raw material type.1 MAXIMIZATION OF FOOD COLOR EXTRACT FROM A SUPER CRITICAL FLUID EXTRACTION PROCESS This case study pertains to a company manufacturing natural colours. Hence it was decided to optimize the process parameters using Taguchi Method. The purpose of chiller temperature (–10° C to +10°C) is to keep carbon dioxide gas in liquid state and thus this may also not affect the yield. Annato seed is one of the inputs from which food colour is extracted using Super Critical Fluid Extraction process (SCFE). From the initial discussion with the concerned executives. 297 . The factors and their levels are given in Table 16. (iii) Raw material size. (v) Flow rate. any increase in the yield would benefit the company. After discussing with the technical and production people concerned with the process. flavours and oleoresins from botanicals. (ii) Pressure. the two levels for each factor proposed to be studied are selected. The present level of average yield is 3.1. So. the colour value (quality or grade) and percentage of bio-molecules in the yield. the objective of this study is to maximize the yield through parameter design.C H A P T E R 16 Case Studies 16. In addition to the main factors. Due to confidentiality. the interactions BE. Hence the first seven factors listed above have been considered in this study. (vii) Valve temperature. (iv) Time. SCFE is a non-chemical separation process operating in a closed system with liquid carbon dioxide gas at high hydraulic pressure and normal temperatures. the levels for the factors A. (vi) Oven temperature. (viii) Carbon dioxide gas purity. Since the yield (output or extract) from SCFE is directly used in the colour formulation. Three important issues related to the natural colour extraction process are the yield.8% of input material. The company uses only high purity carbon dioxide gas and hence this factor may not have any effect on the yield. Levels of the factors: The present level settings have been fixed by experience. it is learnt that the values for the process parameters have been arrived at by experience and judgement. B and C are coded and given. and (ix) Chiller temperature. + (2. the colour value and the percentage of bio-molecules present could be used as response variables.19)2 = 443.5 hr 2 units 80°C 90°C Proposed setting Level 1 Level 2 Type 1 X2 M1 0. (F) Valve temp.9455 32 SSTotal = (4. Grand total = 119.5–1.7318 .70)2 – CF = 487. M2 ..19)2 + . Sixteen trials have been conducted randomly each with two replications. It is computed as follows: X Y Percentage yield =    100  W  where X = final weight of the collection bottle Y = initial weight of the collection bottle W = total weight of raw materials Weight of raw material has been maintained at a constant value in all the experiments.3. Triangular table has been used to determine the assignment of the main factors and interactions to the OA. The layout of the experiment with the response (yield in percentage) is tabulated in Table 16.6773 – 443.75 hr 2.9455 = 43. However..2.00)2 + (3.0 hr 1.19 Total number of observations = 32 Correction factor (CF) = (119.0 hr 3.93)2 + (6. M3 0. The percentage yield.298 Applied Design of Experiments and Taguchi Methods TABLE 16. 
The assignment of factors and interactions are given in Table 16.0 100°C 120°C Operating range AP and TN 350–700 bar M1 .1 Factors Seed type (A) Pressure (B) Particle size (C) Time (D) Flow rate (E) Oven temp. Data analysis: The response totals is provided in Table 16.5–3 units 70°C–110°C 90°C–130°C Response variable: The SCFE process is being used for colour extraction from annatto seeds. in this study percentage yield has been selected as the response variable as per the objective. (G) Process factors and their levels Present setting TN X1 M2 0. Simple repetition has been used to replicate the experiment.47)2 + (3.5 70°C 100°C Type 2 X3 M3 1. Hence L16 OA has been selected.2. Design of the experiment and data collection: There are seven main factors and three interactions to be studied and have 10 degrees of freedom between them. 53 4.67 5.63 1. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 B 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 E 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 Layout of the experiment and the data for Case 16.00 3.5.94)2  CF + 16 16 = 443.52 BE 59.37 2.25 59. Also.20 Y2 4.68 47.33 3.07 BF 61.67 4.19 SSA = 2 A1 A2 + 2  CF nA1 nA 2 = (59.25)2 (59.93 6.97 3.4.27 2.00 59.10 2.20 2.9604 – 443. From the initial ANOVA (Table 16. no.03 5.Case Studies 299 TABLE 16.62 C 71.03 4.03 2.65 G 57.3 Response totals for Case 16.13 4. it can be seen that the main effects B.47 4.60 5.70 3.43 2. and F are significant.08 60.80 2.1 Factors Level 1 Level 2 A 59.94 B 62.0149 Similarly.67 61.26 2.83 3.1 Columns/ Effects BE 3 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 G 4 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 EG 6 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 A 8 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 C 10 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 D 12 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 F 14 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 BF 15 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 Response Y1 4. E.06 4.19 3. the sum of squares of all effects are computed and tabulated in Table 16.70 TABLE 16.54 64. D.68 E 68.92).51 D 53.4). C.57 56.51 65.92 50.2 Trial. the interaction EG seems to have some influence (F = 3.47 3.30 4.12 57.93 3.03 4. The other effects are pooled into the error term and the final ANOVA is given in Table 16. From .33 4.27 F 54.11 EG 62.9455 = 0. 8695 3.58 24.1 Degrees of freedom 1 1 1 1 1 1 25 31 Mean square 1.82 11.7318 ANOVA (initial) for Case 16.82 0.08 C(%) 2.0332 0.00 F 0.54 41.92 0.1 Degrees of freedom 1 1 1 1 1 1 1 1 1 1 21 31 Mean square 0.7970 4.08). Optimal levels: The optimal levels are selected based on the average response of the significant factors given in Table 16.52 23.8695 3.07 5.25 = 4.79* 53.75 10.68 16. the interaction EG seems to have some influence (F = 4. the interaction EG is not significant at 5% level of significance.05.1941 0.1.4 Source of variation A B C D E F G BE EG BF Error Total Sum of squares 0.7970 0.2559 4.6284 10.08 1.1.4632 0. .1063 18. *Significant at 5% level.7970 0.24.2559 4.7970 0.5 are significant.2032 F0 0.0149 1.300 Applied Design of Experiments and Taguchi Methods the final ANOVA (Table 16.1024 4.75 10.1063 18.6284 10.28 0. And all the main effects given in Table 16.53 41. TABLE 16.1063 18.05.77 100.6284 10.1941 0.2669 43.8695 3.2559 4.1024 0.16 3.5 Source of variation B C D E F EG Pooled error Total Sum of squares 1.1063 18.36 4.8806 43.06 0.0332 0. However.23 9.6284 10.72* 2.1941 0.5).16 100.4632 0.1952 F0 5.71 55.50 C(%) 0.84* 22.58 24.1941 0.03 2.85 7. Hence this interaction has been considered while determining the optimal levels for the factors. 
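The correction factor, total sum of squares and factor sums of squares computed for this case follow a fixed pattern that is easy to script. The sketch below shows that arithmetic on a small set of made-up observations; only the structure, not the numbers, matches the case.

```python
def factor_ss(level_totals, n_per_level, cf):
    """Sum of squares of a factor from its level totals: sum(T_i**2 / n_i) - CF."""
    return sum(t * t / n for t, n in zip(level_totals, n_per_level)) - cf

# Hypothetical responses from a two-level experiment (8 observations)
y = [4.1, 3.9, 5.0, 4.7, 3.2, 3.5, 4.4, 4.6]
N = len(y)
T = sum(y)
cf = T * T / N                                # correction factor = (grand total)^2 / N
ss_total = sum(v * v for v in y) - cf         # total sum of squares

# Level totals of a factor whose first four runs are level 1 and last four level 2
a1, a2 = sum(y[:4]), sum(y[4:])
ss_a = factor_ss([a1, a2], [4, 4], cf)
print(round(cf, 3), round(ss_total, 3), round(ss_a, 3))
```

Dividing each sum of squares by its degrees of freedom (levels minus one) gives the mean squares, and the F ratios reported in the ANOVA tables are those mean squares divided by the error mean square.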
D.21 = 4.30 1.7318 ANOVA (final) for Case 16.2559 4.67 93.30 1.44* 89.00 Since F0.8695 3.32. TABLE 16.6. it is found that only the main effects B.49* 15.0149 1. C. E and F are significant.85 7. 04) – 3(3.7 Interaction Response total Average response Response totals for Case 16.94 – 11.1 C 4. So. the response totals for the four combinations and their average is given in Table 16.16 = 5. these four are considered. D.42 Since the objective is to maximize the yield.8. TABLE 16. for predicting the optimal yield.91 2.11 E 4.1) = (4.36 3.35 E 1G 2 34.72 32 C1 = 4. TABLE 16.1 Level recommended Type 1 or Level 1 = Level 1 = Level 2 = Level 1 = Level 2 = Level 1 = Type 2 X2 M1 1.7.19 = 3.1 E 1G 1 34.54 For the interaction (EG).0 hr 2.11.14 F 3.86 E2 G2 27.78 .72) = 16.41 4. D2 = 4. E and F together contribute about 85% of the total variation.11 + 4. E1 = 4.Case Studies 301 TABLE 16.6 Factors Level 1 Level 2 B Average response for Case 16.04 N Pred = Y + (C1  Y ) + (D2  Y ) + (E1  Y ) + (F2  Y ) = C1 + D2 + E1 + F2  3Y (16.31 3.8 Factors Seed type (A) Pressure (B) Particle size (C) Time (D) Flow rate (E) Oven temperature (F) Valve temperature (G) Optimal levels recommended for Case 16. The optimal levels recommended are given in Table 16.31 + 4. Y = 119.27 E2G 1 22.97 D 3.48 + 4. the optimal condition is B1C1D2E1F2G1.31.04 3.34 4.5 units 100°C 100°C The factors C. F2 = 4.91 3.16 4.48.48 2.76 4. The average rejection rate in the disc pad was observed to be about 1. 16. The problem was the occurrence of relatively high number of rejections in the disc pad module. crack and soft cure are considered.1 Pareto analysis of defects in disc pad. After experimentation.2%.1.2 AUTOMOTIVE DISC PAD MANUFACTURING This case has been conducted in a company manufacturing automotive disc pads used in brakes.. Source: Krishnaiah. the yield has been increased by about 52% from the present level. which is almost same as mPred.82%. K.8%. the average yield has been found to be 5. the two major defects namely. no.8%. vol. and Rajagopal. Maximization of food colour extract from a super critical fluid extraction process—a case study. Conclusion: The present yield of the colour extraction from the SCFE process is 3. Journal of Indian Institution of Industrial Engineering..2. it is necessary to further reduce the rejections in today’s world of ‘zero ppm’.302 Applied Design of Experiments and Taguchi Methods Confirmation experiment: A confirmation experiment at the recommended levels has been run. Objective of the study: The objective of the study was to reduce the rejections in the disc pad module of the company. In order to reduce the rejections. it is evident that the major defects affecting the disc pad are the crack and the soft cure shown in Figure 16. K. From three replications with Type 1 seed and three replications from Type 2 seed.1. From Figure 16. . In order to know which type of defect contribute more to the defectives. 1. the levels have been changed and with these levels the yield has been increased to about 5. Pareto analysis had been performed. Though it might seem to be less. 3000 August 2500 September Number of rejections 2000 October 1500 1000 500 0 Soft cure Crack Damage Clot Logo Layer Others Defects type FIGURE 16. January 2001. The Pareto chart is shown in Figure 16. thereby improving the productivity of the disc pad module. Thus. XXX. 
the team of the quality and manufacturing engineers and the experimenter decided to have the following factors for study.Case Studies 303 Crack (An imperfection above/below the surface of the cured pad) FIGURE 16. The cause and effect diagram for the two types of defects is shown in Figures 16. MATERIAL Perform thickness variation Improper paranol spay Deviation from SOP Misalignment MEN Defective preform Plate profile Preform compactness SOFT CURE Reduced temp Lesser pressure High temperature Punch die crack Punch die alignment Lesser curing time MACHINE METHOD Paranol mix ratio Improper mixing FIGURE 16. .3 and 16. Identification of factors and their levels: After analysing the cause and effect diagrams.2 Soft Cure (Presence of soft areas in the cured pad) Major defects of disc pad. and proposed levels for the factors are given in Table 16.10 respectively.4.3 Cause and effect diagram for the defect soft cure.9 and 16. The levels were also fixed in consultation with the engineers. Factors and their current levels. of vents Paranol mix ratio TABLE 16.65 s 3 1:10 . of vents Paranol mix ratio Factors and their current levels Operating ranges 143°C–156°C 165–175 kgf/cm² 3.0 s 4.5 s– 4.1 s 3.0 s 3 1:15 150°C–156°C 175 kgf/cm² 4.5 s 4.5 s 4 1:20 Current levels 150°C 175 kgf/cm² 4.5 s 3. TABLE 16.9 Factors Temperature Pressure Close time Open time No.5 s– 4.10 Code A H C D E G Factors Temperature Pressure Close time Open time No.5 s 2– 4 1:10–1:20 Proposed levels for the factors Levels 143°C–149°C 165 kgf/cm² 3.304 Applied Design of Experiments and Taguchi Methods MATERIAL Moisture in mix Improper paranol spay Improper placement Misalignment MEN Defective preform CRACK Vent gap Holding time Breathing cycle High temperature Variation in close time Variation in open time MACHINE METHOD Preform holding Improper mixing FIGURE 16.5 s 3.5 s 2 1:10 — 170 kgf/cm² 4.4 Cause and effect diagram for the defect crack.  p  S/N ratio = 10 log   1  p  (16. they produce 360 disc pads. (13. The S/N data is also given in Table 16.13 and 16. The defectives are converted into fraction defective and transformed into S/N ratio using Eq.14 respectively. That is from each experiment the output was 360 items. TABLE 16.11 Trial no. .2) Data analysis (Response Graph Method): The response (S/N) totals of all the factor effects and the average response is tabulated in Tables 16.4) which is written below.14. Hence each shift’s production has been considered as one sample.Case Studies 305 Design of the experiment: Since one factor is at two levels and others are at three levels. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Assignment of factors for Case 16. The ranking of factor effects is given in Table 16. The order of experiments was random.2 Columns/Factors 3 4 5 C D E 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 2 3 1 3 1 2 2 3 1 3 1 2 1 2 3 2 3 1 1 2 3 3 1 2 3 1 2 2 3 1 1 A 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 e2 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 6 e6 1 2 3 2 3 1 3 1 2 2 3 1 1 2 3 3 1 2 7 G 1 2 3 3 1 2 2 3 1 2 3 1 3 1 2 1 2 3 8 H 1 2 3 3 1 2 3 1 2 1 2 3 2 3 1 2 3 1 Data collection: In each shift. The data collected is given in Table 16. L18 OA has been selected and the factors are randomly assigned to the columns (Table 16.12.12. These 360 items have been subjected to 100% inspection and defectives were recorded (both types of defects included). The vacant columns are assigned by e indicating error. The occurrence of defectives also has been less.11). 
0028 0.2 Fraction defective (p) 0.98 –152.7 TABLE 16.50 –17.52 0 –25.57 –131.76 –10.45 –15.99 –62.0083 0.85 6.52 –17.22 –17.28 e2 –88.52 –25.51 –20.0194 0 0.2 D E –91.16 3 G –15.47 6 H –10. Accordingly the best (optimum) levels for the factors in the order of increasing ranking are given in Table 16.31 –104.52 Response totals for Case 16. .31 4 D –21.2 A –11.50 –25.31 –76.50 –10.23 –85.0111 0.12 Standard L18 OA Trial no.49 0 0 –20.15 5 C –16.07 Data for Case 16.0083 0 0.08 e6 –81.306 Applied Design of Experiments and Taguchi Methods TABLE 16.76 –3.0139 0.75 –15.56 –91.59 –17.04 –106.55 –20.0056 0 0 0.13 Factors Level 1 Level 2 Level 3 A –105.76 –14.72 –95.78 –16.0167 0 0.53 –106.0028 S/ N ratio –19.38 – C –100.57 –62.46 18.03 G –91.15. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Defectives per 360 items 4 6 0 2 0 0 3 0 1 0 1 1 7 0 5 3 4 1 TABLE 16.38 1 E –15.34 7.14 Factors Level 1 Level 2 Level 3 Difference Rank Average response and ranking of factor effects for Case 16.17 2 Optimum levels for the factors: The best levels for the factors are selected based on maximum S/N ratio (Table 16.57 H –63.22 –12.84 –17.77 –19.48 –90.70 0 –22.71 7.14).0028 0 0.77 0 –25.24 –58.0111 0.10 2.93 – 5.04 0 –18.04 –112.55 –88.0028 0. 3 A STUDY ON THE EYE STRAIN OF VDT USERS An experiment has been designed to investigate some of the factors affecting eye strain and suggest better levels for the factors so as to minimize the eye strain. exposure time.16.. The experiment has been run for three replicates (three shifts).17 Factors Factors and their proposed levels for Case 16. design of work station. (E) Pressure (F) Optimum level of factors for Case 16. reference material.17) have been selected for study after discussing with eye specialists.15 Factors Level D 3 Factors and their levels for Case 16.5 s 4 1:15 165 kgf/cm² Confirmation experiment: A confirmation experiment has been run using the proposed optimum parameter levels. the results obtained for three consecutive shifts were zero defectives. lighting.16 Factors Temperature (A) Close time (B) Open time (C) No. TABLE 16. Unpublished undergraduate project report supervised by Krishnaiah.0 s 4. (Professor).3 Proposed levels of study Level 1 Viewing angle (A) Viewing distance (B) Exposure time (C) Illumination (D) 0° (at eye level) 40 cm 60 min 250 lux Level 2 –20° (below eye level) 70 cm 120 min 600 lux . The following factors and levels (Table 16. From the confirmation experiment. Source: Quality Improvement using Taguchi Method. TABLE 16. Anna University. of vents (D) Paranol mix. nature of work. radiation and heat. K. 2007.2 Optimal level 143°C–149°C 4.Case Studies 307 TABLE 16. environmental conditions. This result validates the proposed optimum levels for the factors. Selection of factors and levels: There are several factors which affect the eye strain such as orientation of display screen. 16.2 H 1 E 3 C 2 A 1 G 2 The optimal levels for the factors in terms of their original value are given in Table 16. Department of Industrial Engineering. 
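The omega transform of Eq. (16.2) and its inverse are used twice in these cases: here to turn each trial's fraction defective into an S/N value, and in the flash butt welding study later to convert a predicted omega value back into a fraction defective. A small sketch follows; trials with zero defectives are simply tabulated as 0 in the data table, so the clipping used below to keep the logarithm finite is only a convenience for the code, not part of the original procedure.

```python
import math

def omega_db(p, eps=1e-6):
    """Omega (S/N) transform of a fraction defective: 10*log10(p / (1 - p)).

    p is clipped away from 0 and 1 so the logarithm stays finite; the clip
    value eps is an assumption added for this sketch.
    """
    p = min(max(p, eps), 1.0 - eps)
    return 10.0 * math.log10(p / (1.0 - p))

def fraction_from_omega(omega):
    """Inverse transform: p = 1 / (1 + 10**(-omega/10))."""
    return 1.0 / (1.0 + 10.0 ** (-omega / 10.0))

# Example: 4 defectives out of 360 inspected units
p = 4 / 360
w = omega_db(p)
print(round(w, 2), round(fraction_from_omega(w), 4))   # about -19.5 dB, and back to ~0.0111
```

Working in the omega (decibel) scale makes the fraction-defective data additive enough for the usual level-average and prediction calculations.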
19.3 Columns/Factor effects 7 8 9 10 11 C C A B A D C C C D 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 1 A 2 B 3 A B 4 A B C 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 5 B D 6 A B D 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 12 D 13 A D 14 B C D 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 15 A B C D 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2 Selection of samples: The study has been carried out over a selected sample of subjects within the age group of 20–25 years having normal vision.308 Applied Design of Experiments and Taguchi Methods Design of the experiment: Since the study involves four factors each at two levels. Four replications have been obtained. That is. we have a 24 full factorial design.18 Trial no. Hence. Data has been analysed using ANOVA. Data collection: The study has been carried out in a leading eye hospital in Chennai. The initial ANOVA is given in .18. TABLE 16. India. It was measured by estimating the lag of accommodation. Data analysis: Table 16. They all had same level of experience. L16 OA has been selected for study. all the main factors and their interactions can be studied. Response variable: The extent of eye strain has been measured by dynamic retinoscopy. The assignment of factors and interactions is given in Table 16. The experimental results are given in Table 16.20. Assignment of factors and interactions for Case 16. 65 0.48 = 4.0478 0.40 0.0087 0.29 Sum of squares 0.55 TABLE 16.00 0.4875 0.0478 0.65 1.65 0.13* 0.00 0.90 R4 1.0087 0.4306 0.65 0.29 92.3369 0.25 2.0087 0.85 1.00 0.50 1.0478 1.00 2.60 29.65 0.0087 0.05.40 0.90 Response total 6.65 0.4306 0.50 1.75 6.75 1.4583 *Significant at 5% level.0087 0.7001 17.75 1.1181 0. ANOVA (initial) for Case 16.10 2.65 0.14* 1.0087 0.40 0.00 2.90 0.65 0.90 0.4875 0.65 0.0009 14.65 0.0009 14.10 3.90 0.50 1.90 R3 1.65 0.3 Degrees of freedom 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 48 63 Mean square 0.0244 0.0244 0.50 1.0009 0.25 6.0478 0.75 2.19 Standard L16 OA Trial no.25 2.60 0.25 1.75 1.50 1.68 3.50 1.50 2.65 0.0478 0.40 0.50 1.00 0.65 0.40 0.85 3.Case Studies 309 TABLE 16.50 2.90 1.0244 0.1.50 1.35 5.20* 1.1181 0.3 R2 1.1650 0.29 0.3369 0.50 1.20 Source A B AB ABC BD ABD CD C AC BC ACD D AD BCD ABCD Error Total F0.0009 0.0244 0.0478 1.06 999.90 Experimental results for Case 16.25 8.90 1.90 0.00 0.35 3.50 2.048.0145 F0 0.50 1.00 0.1650 0.90 1.65 0.00 6.00 0.60 3.85 2.75 5.69* 0.50 2. .68 11.50 2.00 0.50 1.40 0.06 8.37* 3. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 R1 1.65 0.60 Grand total = 74.75 1.50 8. TABLE 16.23 Factors Viewing angle (A) Viewing distance (B) Exposure time (C) Illumination (D) Optimal levels of factor for Case 16.45 100.4583 *Significant at 5% level.310 Applied Design of Experiments and Taguchi Methods The insignificant effects are pooled into the error term so that approximately one-half of the effects are retained in the final ANOVA (Table 16.5250 0.4306 0.0478 0.65* 30.3369 0. for determining the optimum levels for the factors the average response for all possible combinations of the three factor interactions ABC and ABD are evaluated and given in Table 16.0478 0.23.95 0.5875 0.4875 1. B.68* 3. 
D and the three factor interactions ABC and ABD are significant.21 Source B C D BC CD ABC ABD ABCD Pooled error Total F0. C.22 Evaluation of ABCD interaction for Case 16.21).0478 0. The optimal levels for the factors are given in Table 16.3 Level –20° 70 cm 60 cm High (600 lux) .38 8.27 0.1181 0.00 Degrees of freedom 1 1 1 1 1 1 1 1 55 63 Optimum levels for the factors: From Table 16.48* 3.1650 0.68 0.02. Hence. the following combinations are evaluated (Table 16.27 0.38 3.0478 0. We have to find the optimal levels for all the four factors A. Sum of squares 14.0141 F0 1025.76* 94.3 Average response 0.24.0478 0.1650 0. C and D. the optimum condition is A2B2C1D2. the combinations A1B2C1 and A2B2D2 are selected (Table 16. it is found that the main factors B.98 7.24).7125 Combination A 1B 2C 1D 1 A1B 2C 1D 2 A2 B 2 C1 D 2 A 1B 2C 2D 2 For minimization of eye strain.47 0.05.21.22).3 Mean square 14.1181 0.0478 0. Since the objective is to minimize the eye strain.1. TABLE 16.36* 11.3369 0. ANOVA (final) for Case 16.38 C(%) 82.4625 0.66 2.55 = 4. TABLE 16.4306 0.27 4.7768 17.4875 1. Since D is not present in A1B2C1 and C is not appearing in A2B2D2. 65 At the optimal condition A2B2C1D2.25 Trial no.5312 0.5875 0.1648) = 0.7812 0.O 2  MSe  [(1/neff ) + 1/r ] (16.5562 0.1.5875 ABC Interaction Combination Average response A1B 1C 1 A1B 1C 2 A1 B 2 C1 A1B 2C 2 A2B 1C 1 A2B 1C 2 A2B 2C 1 A2B 2C 2 1.7125 0.3 1 0.3) (16.55 = 1.8434 0.1648.25.24 Evaluations of ABC and ABD Interactions for Case 16.7187 1.0141 .55 = 4.65 2 0.4218 1. Optimum predicted response: 74.4625 – 1.3 ABD Interaction Combination Average response A 1B 1D 1 A 1B 1D 2 A 1B 2D 1 A 1B 2D 2 A 2B 1D 1 A 2B 1D 2 A 2B 2D 1 A2B2D2 1. Response Data for confirmation experiment for Case 16.55.02 neff = N/(1 + total df considered for prediction) = 64 = 32 (1 + 1) r = number of replications in confirmation experiment = 5 MSe = Error mean square = 0.5625 0. the mean value (m) obtained is 0. TABLE 16. 64 The optimum predicted response at the optimum condition is The grand average (Y ) = N pred = Y + (A2 B2C1D2  Y ) = 1.6500 1.4625 Confidence interval for confirmation experiment CI = FB .8662 1.Case Studies 311 TABLE 16.7500 1.8062 0.8062 Confirmation experiment: This was conducted with the proposed optimal levels and five replications were made.O 1.65 5 0.4) F0.40 3 0.05.40 4 0. The data from the confirmation experiment is given in Table 16.5000 1.1648 + (0. 348 0.4 Present setting 6 mm 3 mm 30 mm min B2 24 0. N. and Manikandan. Even if a single joint fails the accessibility for repair would be a serious problem and costly. .1145  32  5  The mean obtained from confirmation experiment (m) = 0.4625 + 0.577 The confidence interval lower limit = (mPred – CI) = 0. As the response variable is defects.0141    +  = 0. vol.348 £ mÿ £ 0. Source: Kesavan. attribute data analysis is applicable. The confidence interval upper limit = (mPred + CI) = 0. November 2002. The entire length of the tube consists of thousands of joints out of which the flash butt welded joints will be about 30%.577 That is.55 Therefore... each at three levels.1145 = 0. XXXI. Selection of factors and levels: The factors are selected from the supplier’s manual and the welding process specification sheet issued by the welding technology centre of the company. R.4 OPTIMIZATION OF FLASH BUTT WELDING PROCESS This study has been conducted in one of the leading boiler manufacturing company in India.8 mm After discussing with the shop floor engineers it was decided to study all factors. 
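The optimum prediction and the confidence interval of Eqs. (16.3) and (16.4) recur in every case study, so a small helper is worth sketching. The tabulated F value, the error mean square and the effective sample size n_eff = N/(1 + degrees of freedom used in the prediction) are passed in directly, since F must be read from the appendix tables; the numeric inputs in the example below are placeholders in the spirit of this case, not its exact figures.

```python
import math

def predicted_optimum(grand_mean, level_means):
    """Additive-model prediction: grand mean + sum of (selected level mean - grand mean)."""
    return grand_mean + sum(m - grand_mean for m in level_means)

def confidence_interval(f_value, ms_error, n_eff, r=None):
    """CI half-width = sqrt(F * MSe * (1/n_eff [+ 1/r for a confirmation experiment]))."""
    term = 1.0 / n_eff + (1.0 / r if r else 0.0)
    return math.sqrt(f_value * ms_error * term)

# Placeholder values: grand mean, two selected level (or combination) means,
# tabulated F, pooled error mean square, and five confirmation replications
mu_pred = predicted_optimum(1.16, [0.95, 0.88])
n_eff = 64 / (1 + 2)            # N / (1 + total df of the terms used in the prediction)
ci = confidence_interval(f_value=4.02, ms_error=0.0141, n_eff=n_eff, r=5)
print(round(mu_pred, 3), round(ci, 3))
```

The confirmation-run mean is judged acceptable when it falls inside mu_pred plus or minus this half-width, which is exactly the check made at the end of each case.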
Ergonomic Study of eye strain on video display terminal users using ANOVA Method. Krishnaiah K. the estimated value from the confirmation experiment is within the confidence interval.312 Applied Design of Experiments and Taguchi Methods CI =  1  1  4. 11.27.. 16. TABLE 16. The selected parameters and their present levels are given in Table 16.26. the experiment is validated.26 Factors Flash length (A) Upset length (B ) Initial die opening (C) Voltage selection (D) Welding voltage relay (E) Forward movement (F) Selected parameters for Case 16. The proposed settings of the factors are shown in Table 16.02  0. One of the major problems facing the company is tube failure. this study has been undertaken to optimize the welding process parameters to avoid defects. Thus. no. So. Journal of Indian Institution of Industrial Engineering. 28 Trial no.27 Factors Proposed levels for experimentation of Case 16.4 Level 1 6 3 40 min B3 23 0.2 Flash length (mm) (A) Upset length (mm) (B) Initial die opening (mm) (C) Voltage selection (D) Welding voltage relay (E) Forward movement (mm) (F) Conduct of experiment: The six factors each at three levels (12 degrees of freedom) is to be studied. After conducting the experiment. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 2 A 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 Experimental design and results for Case 16. . The experiment was conducted and data are collected which is given in Table 16. TABLE 16. the crack observed in the bend test is considered as response variable.28.9 Level 3 8 5 42 B5 29 1. A sample size of 90 has been selected for the whole experiment.6 Level 2 7 4 41 B4 26 0. Hence L18 OA has been selected and the factors are assigned as given in Table 16.Case Studies 313 TABLE 16.4 3 B 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 4 C 1 2 3 1 2 3 2 3 1 3 1 2 2 3 1 3 1 2 5 D 1 2 3 2 3 1 1 2 3 3 1 2 3 1 2 2 3 1 6 E 1 2 3 2 3 1 3 1 2 2 3 1 1 2 3 3 1 2 7 F 1 2 3 3 1 2 2 3 1 2 3 1 3 1 2 1 2 3 Good 4 0 1 0 1 3 3 3 4 0 2 3 5 5 3 0 0 3 Bad 1 5 4 5 4 2 2 2 1 5 3 2 0 0 2 5 5 2 Each trial was repeated for 5 times and in that the good one indicates those units which have passed the bend test without cracking and the bad one represents a failed unit.28. 314 Applied Design of Experiments and Taguchi Methods Data analysis considering the two class data as 0 and 1 We consider good = 0 and bad = 1. TABLE 6. That is. The response totals for each level of all factors is given in Table 16.7778 90 = Total sum of squares (SSTotal) = T – CF = 50 – 27. SSC = 0.2222 2 A1 A2 A2 + 2 + 3  CF nA1 n A2 n A3 SSA = = (20)2 (13)2 (17)2 + +  CF 30 30 30 = 28.8222 Similarly.6888.2888. other factors sum of squares are SSB = 0.31. SSF = 1.29.0888 SSe = SSTotal – (sum of all factor sum of squares) = 16.30 and 16.0226 The Analysis of Variance is given in Tables 16.1555. .7778 = 0.29 Factors Level 1 Level 2 Level 3 A 20 13 17 Response totals for Case 16.4 B 18 19 13 C 17 15 18 D 10 21 19 E 12 18 20 F 13 21 16 Computation of sum of squares: Good = 0 and bad = 1 Grand total (T) = 50 (sum of all 1s.1555.6000 – 27. bad items) Correction Factor (CF) = T2 (N = total number of parts tested) N (50)2 = 27.7778 = 22. SSE = 1. SSD = 2. 05. TABLE 16.0888 16.97 1.1444 0.65 0. namely voltage selection is significant.10 0.4111 0.30 5.0226 22.70 3. Pooled ANOVA for Case 16.2222 Degrees of freedom 2 2 2 83 89 *Significant at 5% level Optimal levels for the factors: The result shows that only one factor (D).0777 1.4 Optimal level 7 mm 5 mm 40 mm min B3 23 0. For the significant factor.4 Mean square 0.2888 1.71 2. 
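The bend-test results in this study are two-class data, and the analysis that follows scores every tested joint as 0 (good) or 1 (bad) before computing sums of squares from the defective counts. A sketch of that bookkeeping is given below; the level counts in the example echo the factor A totals reported later in the analysis, so the printed correction factor and sums of squares can be checked against the case.

```python
def attribute_ss(bad_by_level, n_by_level):
    """Factor sum of squares for 0/1 attribute data from per-level defective counts."""
    total_bad = sum(bad_by_level)
    total_n = sum(n_by_level)
    cf = total_bad ** 2 / total_n                     # correction factor = T^2 / N
    ss = sum(b * b / n for b, n in zip(bad_by_level, n_by_level)) - cf
    return ss, cf

# A three-level factor with 30 units tested at each level
bad = [20, 13, 17]            # defectives observed at levels 1, 2 and 3
n = [30, 30, 30]
ss_a, cf = attribute_ss(bad, n)
ss_total = sum(bad) - cf      # with 0/1 scoring, the raw sum of squares equals the bad count
print(round(cf, 4), round(ss_total, 4), round(ss_a, 4))
```

Because each observation is 0 or 1, the grand total is simply the number of failed units, and everything else proceeds as in a variable-data ANOVA.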
.5444 0.37 5.37* 2.5444 0.2888 1.61 C(%) 3.4 Mean square 1.77 = 3.50* 2.70 10. one-half of the factors have been considered (Pooled ANOVA). The recommended levels are given in Table 16.15.1555 1.1555 2.90 79.20 4. also the best levels are selected from the average response.8222 0.2131 F0 5.77 2.55 C(%) 10. ANOVA for Case 16.5777 0.10 100.0888 17.1555 1.1444 0.00 Sum of squares 0.2080 F0 1.2.5777 0.32.90 72.05.11.6888 0.83 = 3.30 5.3444 0.30 Source A B C D E F Error (pure) Total F0.6891 22.Case Studies 315 TABLE 16.2222 *Significant Degrees of freedom 2 2 2 2 2 2 77 89 TABLE 16. level is selected based on minimization of average response and for the remaining factors.60 100.2.00 Sum of squares 2.31 Source D E F Pooled error Total F0.32 Factors Flash length (A) Upset length (B ) Initial die opening (C) Voltage selection (D) Welding voltage relay (E) Forward movement (F) Recommended optimum levels for Case 16.20 4.6 mm Predicting the optimum process average: For predicting the process average. .4000 13/30 = 0.316 Applied Design of Experiments and Taguchi Methods The optimum predicted response at the optimum condition is N Pred = Y + (D1  Y ) + (E1  Y ) + (F1  Y ) (16.2271 Therefore.3333 12/30 = 0.33). This is used for predicting the optimum average.1405 – 0. the confidence interval for the optimum predicted mean is mPred – CI £ mPred £ mPred + CI 0. WPred = 0.7609 –1.2271 £ mPred £ 0.0778 = 0.6) The W value is used for predicting the optimum average.0109 – 0. mPred = 1 + 10  :/10 1 = 0. TABLE 16.4333  p/100  W(db) = 10 log    1  p/100  Omega(W) value 0.2131  0. Confirmation experiment: This has been conducted with the recommended factor levels on ten units and all the ten have passed the test. Confidence interval for the predicted mean CI = FB .33 Fraction defectives and omega transformation for Case 16.8) = 3.1656 E1 F1 (16.O 2  MSe  1 neff (16.9681) + (–1.1405 + 0. the average predicted value of p = 0.O1.5555 10/30 = 0.8736 Converting back omega value.2271 – 0.0866 £ mPred £ 0.9681) = –7.4 Factors Overall mean( Y ) D1 Fraction defective 50/90 = 0.0109 –1.5) Since the problem is concerned with attribute data.7609 – 0.7) That is. the fraction defective is computed and the omega transformation has been obtained (Table 16.11  0.1656 – 0.1405 (16.3676 This indicates that the predicted average lies within the confidence interval.9681 + (–3.9681 –3.14.9681) + (–1. 3323. A. Selection of factors and levels It was decided to study all the factors at two levels.9) = The interval range is 3.34.Case Studies 317 Confidence Interval for the confirmation experiment: From the confirmation experiment the average fraction defective is zero (no defectives observed). .0778 + 0.O 1. it was decided to improve quality by optimizing the process parameters using Taguchi Method.5 WAVE SOLDERING PROCESS OPTIMIZATION This case study was conducted in a company manufacturing Printed Circuit Boards (PCBs) which are used in scanners.3676) overlaps fairly well with the confidence interval of the confirmation experiment (–0. CI = FB . Thus. The proposed levels for the factors are given in Table 16. we can conclude that the experimental results can be reproducible.. The following process parameters (factors) were selected for study.O 2  MSe  [(1/neff ) + 1/r ] (16.3433 – 0.2131  (0. Also. Chennai.0866. Professor.3323). 0. Unpublished Graduation thesis Supervised by Krishnaiah K. 0. Improving quality of flash butt welding process using Taguchi method. While zero defects are the goal. 
the values are changed marginally. 16. the present level of defects is very high. May 2007. Identification of factors: A detailed discussion was held with all the engineers concerned and the operators and decided to study the process parameters. India. decided not to disturb the routine production schedule. Conclusion: The confidence interval for the predicted process average (–0..3423 The confirmation value lies within the confidence interval. After soldering.1) = 0. Source: Jaya. · · · · Specific gravity of flux Preheat temperature of the board Solder bath temperature Conveyor speed The present setting and operating range of these parameters is given in Table 16. The process engineer suggested not to change the parameter values too far because he was doubtful about the output quality. These PCBs after assembly soldered using a wave soldering machine. Anna University. From the company it was learnt that the soldering machine was imported from Japan and the levels of the various parameters of the process set initially were not changed.3423 £ mconfirmation £ 0.35. Department of Industrial Engineering. high quality of PCBs is required. Hence.11  0. Hence. Hence. The present average level of defects is 6920 ppm. each PCB is inspected for soldering defects. Hence all the PCBs are classified based on the number of holes/cm2 into high density and low density. The variation in the operating range of factor A = 0. Since the number of defects in a PCB is low. For example. consider A . Note that the response (Y) given is the sum of two replications.82 = 0. Measurement method: There are about 60 varieties of PCBs differing in size and number of holes.00 TABLE 16. Hence.838 195 258°C 0.02 = 0.5 Level 1 0. the following measure has been used as response variable.002 = 0.820 + 0. L16 OA has been selected.02 = 0.84 m/min Level 2 0. The assignment of factors and interactions is given in Table 16.20 m/min Proposed factor levels for Case 16.822 Level 2 = upper value of operating range –10% of 0. of holes Design of experiment and data collection: To study 4 main factors and 6 two-factor interactions.34 Factors Specific gravity of flux (A) Preheat temperature (B) Solder bath temperature (C) Conveyor speed (D) Present setting of the factors for Case 16.825 180°C 246°C 1. the two levels are fixed as 180 ± 15. BC. of defects in a PCB  1000 Total no.840 – 240°C–260°C 0.36. a given experiment was conducted.840 – 0. And all higher order interactions can also be analysed. No. The data collected is also tabulated in Table 16.838 For factor B. . BD and CD may have an affect on the response. it was suspected that the two factor interactions AB. High density: more than 4 holes/cm2 Low density: up to 4 holes/cm2 High density PCBs are considered for this study. In addition to the main factors.318 Applied Design of Experiments and Taguchi Methods TABLE 16. where 180 is the present setting.96 Present setting 0. AD.822 165°C 242°C 0.36. Whenever these types of PCBs are scheduled for production.5 Operating range 0.84 – 0.02 Level 1 = starting value of operating range +10% of 0.80–1. it will be a 24 full factorial experiment.35 Factors Specific gravity of flux (A) Preheat temperature (B) Solder bath temperature (C) Conveyor speed (D) The levels for factors A.002 = 0. C and D have been fixed as follows.820–0. AC. 3 12. + (5.37.2 29.7 11.5 7..88 .7)2 – = 303..0 10.36 Trial no.5 16. The response totals for all the factors and interactions are given in Table 16. 
Computation of sum of squares: Grand total (T) = 188 Total number of observations (N) = 32 Overall mean ( Y ) = 188 = 5.6 3.9 14.9 9.1)2 + .Case Studies 319 TABLE 16.0 Y = Response = Sum of two replications Data analysis: The data is analysed considering the response as variable data.7 7.8 11.875 32 Correction factor (CF) = T2 (188)2 = 32 N (188)2 32 SSTotal = (5.0 14.9 6.6 12.4)2 + (8.5 (L16 OA) 3 B C 4 D Columns/Factors and interactions 5 6 7 8 9 10 11 C B B A A A A D D C B B C D D C D D D 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2 12 A B C 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 13 14 A A B C 15 A Y 1 C 2 B 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1 13.4 6. Assignment of factors and interactions for Case 16. 28 1. TABLE 16.7 95.7 74.5 109.031 30.781 0.3 92.54 20.09 0.320 Applied Design of Experiments and Taguchi Methods SSA = 2 A1 A2 + 2  CF nA1 nA 2 = (108)2 (80)2 (188)2 +  16 16 32 = 24.9 101.21 21.605 45. the final ANOVA is given in Table 16.2 113.8 88.8 89.605 45.7 Factor effects BD CD ABC ABD ACD BCD ABCD Level 1 74.04 0.8 93.205 36.2 98.66 13.38). the sum of squares of all effects are computed and summarized in the initial ANOVA (Table 16.125 30.20 0.811 0.2 TABLE 16.2 94.402 303.5 94.205 3.5 78.601 7.5 107.500 4.031 21.601 2.205 3.002 1.8 74.3 113.005 2.9 89.5 Mean square 24.50 Similarly.761 49.005 0.54 20.5 93.275 F0 10.1 86.5 81 109. After pooling the sum of squares as usual.8 Level 2 113.97 Degrees of freedom 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 16 31 .0 78.125 30.77 1.5 Level 2 80.34 0.601 7.211 45.38 Source A B C D AB AC AD BC BD CD ABC ABD ACD BCD ABCD Error Total Sum of squares 24.500 4.37 Factor effects A B C D AB AC AD BC Level 1 108 88.205 2.39.211 45.005 2.031 30.04 3.811 0.761 49.01 13.2 99.85 9.3 Response totals for Case 16.880 Initial ANOVA for Case 16.031 21.601 2.1 98.781 0.0 99.005 0. 775 = 5.32 100.800 A1C1D2 = 5.675 7. Since the objective is to minimize the response (measure of defects). and BD) and the three-factor interactions ABC and ACD are significant.95 9.601 303.500 21.575 A1B2 = 7.26* 21.5 Mean square 24.075 A2 C2D1 = 5.5 15. the combinations A2C1D1 = 2. B.450 A2B2 = 4.88 10.16 F0 11.500 B1 D1 = B1 D2 = B2 D1 = B2 D2 = A 2 B 1 C1 A 2 B 1 C2 A 2 B2 C1 A 2 B 2 C2 4. Since B is not present in A2C1D1 and D is not in A2B2C1.605 45.675 = 6.438 A2 C1 = 4.40).05.675 and A2B2C1 = 2. the optimal combination is A2C1D1 B2.375 A2 C2 = 5.350 6.01 16.88 *Significant Degrees of freedom 1 1 1 1 1 1 1 1 23 31 Optimum levels for factors: From Table 16.70 Therefore.125 30.23 = 4.500 21.550 A1C1D1 = 10.00 7.39.Case Studies 321 TABLE 16. AC.69* 21.925 A2B1 = 5. C and D.60 A2 B2C1D1 = 1.75 A2 C1D1B2 = 1.425 A2 C2D2 = 5.375 = 10. Final ANOVA for Case 16.01 2.825 A1 D1 = A1 D2 = A2 D1 = A2 D2 = A 1 B 1 C1 A 1 B 1 C2 A 1 B 2 C1 A 1 B 2 C2 Mean values of significant effects for Case 16. it is found that only one main factor (A).90* 14.601 7.605 45.763 5.675 are selected (Table 16.050 5.450 .5 A2 = 5.78* 13.39 Source A AB AC AD BD CD ABC ACD Pooled error Total F0. The existing and proposed (optimum) levels are given in Table 16.11* 3.40.14 15. 
the two-factor interactions (AB.325 A1C2D1 = 4.11* C(%) 8.463 5.75 A1B1 = 5.41. Hence for determining the optimum levels for the factors the average response for all possible combinations of these effects are evaluated and given in Table 16. A2 C1D1B1 = 3.28.005 49.601 7.012 = 6.811 45.950 = 5.52 22.601 49. TABLE 16.738 4.150 A1 C1 = 8.800 = 2.031 30.675 A2 C1D2 = 6.13 16.06 6.60 A2 B2C1D2 = 3.125 30.350 = 5.005 2.1.100 = 4. We have to find the optimal levels for all the four factors A.00 Sum of squares 24.031 30.625 A2C1D1 = 2.811 45.601 49. AD. the mean values of the following combinations are obtained.063 A1 C2 = 5.34* 9.40 A1 = 6.725 A1C2D2 = 6. The vacant columns are assigned by e . Source: Krishnaiah. namely.5 20.41) TABLE 16. 31–Feb.6 Level 1 6.875 + (1. Jan.0 Design of experiment: Since the study involves three main factors each at three levels and three two factor interactions. scan speed (B) and quench flow rate (C) have been selected and decided to study all factors at three levels. K. National Conference on Industrial Engineering Towards 21st Century.875) = 1. Tirupati. Table 16.60 – 5. 2.60 Confirmation experiment: A confirmation experiment with two replications at the optimal factor settings was conducted.6 APPLICATION OF L27 OA A study was conducted on induction hardening heat treatment process. The experimental design layout is given in Table 16. AC and BC may also influence the surface hardness and hence these are also included in the study.5) was used to assign the main factors and interactions. S.. Organized by the Dept. L27 OA has been selected. . The following linear graph (Figure 16.42 Factors Power potential (kW/inch2) (A ) Scan speed (m/min) (B) Quench flow rate (l/min) (C) Factors and levels for Case 16. Since each interaction has four degrees of freedom. indicating error. 1998. it was suspected that the two factor interactions AB.5 Factors Specific gravity of flux (A) Preheat temperature (B) Solder bath temperature (C) Conveyor speed (D) Existing Level 0.0 Level 2 7. University. The three induction hardening parameters.5 1. TABLE 16.V.5 Level 3 8.5 2.0 m/min Proposed Level 0.10) Predicted optimum response (mPred) = Y + (A2B2C1D1 – Y ) = 5. It was found that the defects reduced from 6920 ppm to 2365. In addition. of Mechanical Engineering.43.5 2. the interaction is assigned to two columns.5 15.0 17.838 195°C 242°C 0. 16.84 m/min (16.42 shows the control parameters and their levels.825 180°C 246°C 1. power potential (A).322 Applied Design of Experiments and Taguchi Methods The existing and proposed (optimum) levels are given below (Table 16. a reduction of 65%.41 Existing and Proposed settings for Case 16. Taguchi Approach to Process Improvement—A Case Study. 6 ( L27 OA) Trial no.Case Studies A1 323 3. The surface hardness (Y) measured is given in the last column of Table 16. Data collection: The order of experimentation was random.6.7 D B2 8. Only one replication was made. 
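Because several two-factor interactions are significant in this case, the best setting is read from the mean response of each level combination of the interacting columns rather than from the main effects alone, as was done with the A2C1D1 and A2B2C1 combinations above. The sketch below computes such combination means for two hypothetical columns and picks the smallest, since the response here is a defect measure.

```python
import numpy as np
from itertools import product

def combination_means(y, *columns):
    """Mean response for every level combination of the given OA columns."""
    y = np.asarray(y, dtype=float)
    levels = [sorted(set(c)) for c in columns]
    means = {}
    for combo in product(*levels):
        mask = np.ones(len(y), dtype=bool)
        for col, lvl in zip(columns, combo):
            mask &= np.array(col) == lvl
        if mask.any():
            means[combo] = float(y[mask].mean())
    return means

# Hypothetical defect scores for 8 runs and two interacting factor columns
y     = [6.2, 3.1, 5.4, 2.8, 7.0, 4.4, 6.1, 3.0]
col_a = [1, 1, 1, 1, 2, 2, 2, 2]
col_c = [1, 2, 1, 2, 1, 2, 1, 2]

means = combination_means(y, col_a, col_c)
best = min(means, key=means.get)        # smaller response = fewer defects
print(means, best)
```

Extending the call with a third column gives the three-way combinations needed when an interaction such as ACD has to be examined.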
16.6  APPLICATION OF L27 OA

A study was conducted on an induction hardening heat treatment process. The three induction hardening parameters, power potential (A), scan speed (B) and quench flow rate (C), have been selected, and it was decided to study all the factors at three levels. In addition, it was suspected that the two-factor interactions AB, AC and BC may also influence the surface hardness, and hence these were also included in the study. Table 16.42 shows the control parameters and their levels.

TABLE 16.42  Factors and levels for Case 16.6: power potential (kW/inch²) (A), scan speed (m/min) (B) and quench flow rate (l/min) (C), each at three levels

Design of experiment: Since the study involves three main factors, each at three levels, and three two-factor interactions, the L27 OA has been selected. The linear graph shown in Figure 16.5 was used to assign the main factors and interactions. Since each interaction has four degrees of freedom, each interaction is assigned to two columns. The vacant columns are assigned e, indicating error. The experimental design layout is given in Table 16.43.

FIGURE 16.5  Required linear graph for Case 16.6

Data collection: The order of experimentation was random. Only one replication was made. The surface hardness (Y) measured is given in the last column of Table 16.43.

TABLE 16.43  Experimental design layout and data for Case 16.6 (L27 OA). The columns are assigned as follows: column 1, A; column 2, B; columns 3 and 4, AB; column 5, C; columns 6 and 7, AC; columns 8 and 11, BC; columns 9, 10, 12 and 13, e (error). The measured hardness values for trials 1 to 27 are 80, 80, 77, 77, 82, 81, 80, 79, 79, 80, 79, 75, 80, 78, 80, 81, 74, 79, 66, 62, 59, 68, 65, 65, 61, 64 and 61.

Data analysis: The response (hardness) totals for all the factors, interactions and vacant columns are given in Table 16.44.

TABLE 16.44  Response (hardness) totals for all effects of Case 16.6

Computation of sum of squares:

Grand total = 1992
Total number of observations = 27
Correction factor (CF) = (1992)²/27 = 146965.33
SS_Total = (80)² + (80)² + (77)² + ⋯ + (61)² − CF = 1580.67

SS_A = A1²/n_A1 + A2²/n_A2 + A3²/n_A3 − CF = (715)²/9 + (706)²/9 + (571)²/9 − CF = 1446.00

Similarly, the other sums of squares are calculated for the main effects B and C and for the vacant columns. The sum of squares of an interaction, say AB, is the sum of the sums of squares of the two columns to which it is assigned, i.e., SS_AB = SS_col.3 + SS_col.4; SS_AC and SS_BC are obtained in the same way from columns 6 and 7 and from columns 8 and 11 respectively. These sums of squares are summarized in the initial ANOVA (Table 16.45).

TABLE 16.45  ANOVA (initial) for Case 16.6

From Table 16.45 it is seen that the interactions are not significant and their contribution to the variation is also negligible. These interactions are therefore pooled into the error term, and the final ANOVA is given in Table 16.46. Even after pooling, it is seen that the main effect A alone is significant.

TABLE 16.46  Final ANOVA with pooled error for Case 16.6

Since the objective is to maximize the hardness, the optimal levels for the factors, based on the maximum average response, are A1, B2 and C1 (Table 16.44).

Conclusion: This case shows how interactions between three-level factors can be handled in Taguchi experiments.
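The level-total arithmetic extends directly to three-level factors, and the per cent contribution reported in the final ANOVA follows from the same quantities. The sketch below is not from the book; it assumes the Case 16.6 figures quoted above (grand total 1992 over 27 runs, level totals 715, 706 and 571 for factor A, each over 9 runs, and SS_Total = 1580.67).

```python
# Sum of squares and per cent contribution for a three-level factor in an L27 experiment.
# Figures assumed from the case study text above.

def ss_from_level_totals(level_totals, runs_per_level, cf):
    """SS for a factor = sum(Ti^2 / n_i) - CF, for any number of levels."""
    return sum(t ** 2 / runs_per_level for t in level_totals) - cf

if __name__ == "__main__":
    grand_total, n_obs = 1992, 27
    cf = grand_total ** 2 / n_obs                      # 146965.33
    ss_total = 1580.67                                 # from the text
    ss_A = ss_from_level_totals([715, 706, 571], 9, cf)
    print(f"SS_A = {ss_A:.2f}")                        # about 1446.00
    print(f"C(%) for A = {100 * ss_A / ss_total:.1f}") # about 91.5 per cent
```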
APPENDIX A

A.1  STANDARD NORMAL DISTRIBUTION TABLE: tabulates the area under the standard normal curve between 0 and z, for z from 0.00 to 3.09 in steps of 0.01.

A.2  THE t DISTRIBUTION TABLE: tabulates the upper-tail critical values of t for right-tail areas of 0.10, 0.05, 0.025, 0.01, 0.005 and 0.001, for 1 to 30, 40, 60 and 120 degrees of freedom and for the limiting case (∞).
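When the printed tables are not at hand, the same critical values can be computed numerically. The sketch below uses scipy.stats (an assumption; SciPy is not part of the book) to reproduce a few representative entries of Appendix A, including an F percentage point of the kind tabulated in Section A.3 that follows.

```python
# Reproducing a few Appendix A entries numerically instead of by table lookup.
from scipy import stats

# A.1: area under the standard normal curve between 0 and z
z = 1.96
print(round(stats.norm.cdf(z) - 0.5, 4))               # 0.4750

# A.2: upper-tail t critical value, e.g. t_0.05 with 10 degrees of freedom
print(round(stats.t.ppf(1 - 0.05, df=10), 3))           # about 1.812

# A.3: upper-tail F critical value, e.g. F_0.05 with (5, 10) degrees of freedom
print(round(stats.f.ppf(1 - 0.05, dfn=5, dfd=10), 2))   # about 3.33
```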
A.3  PERCENTAGE POINTS OF THE F DISTRIBUTION: tabulates the upper-tail critical values F0.05,n1,n2, F0.025,n1,n2 and F0.01,n1,n2, where n1 is the numerator and n2 the denominator degrees of freedom.

A.4  PERCENTAGE POINTS OF THE STUDENTIZED RANGE STATISTIC: tabulates q0.05(p, f) and q0.01(p, f).

A.5  SIGNIFICANT RANGES FOR DUNCAN'S MULTIPLE RANGE TEST: tabulates r0.05(p, f) and r0.01(p, f).

APPENDIX B

B.1  TWO-LEVEL ORTHOGONAL ARRAYS: the standard L4, L8, L12, L16 and L32 arrays, followed by the two-level interaction tables, which give, for any pair of columns, the column in which their interaction appears. (For the L12 array no specific interaction columns are available.)

B.2  THREE-LEVEL ORTHOGONAL ARRAYS: the standard L9 and L18 arrays. (In the L18 array, interaction is allowed only between columns 1 and 2.)
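The defining property of every array in Appendix B is balance: in any pair of columns, each combination of levels occurs equally often. The sketch below is not from the book; it checks this property for the standard L4 array, and the same function applies to any of the larger arrays once they are entered as lists of rows.

```python
# Checking the pairwise balance (orthogonality) of an orthogonal array.
from collections import Counter
from itertools import combinations

L4 = [  # trial x column, levels coded 1/2 (standard L4 array)
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

def is_orthogonal(array):
    """True if every pair of columns contains each level combination equally often."""
    n_cols = len(array[0])
    for c1, c2 in combinations(range(n_cols), 2):
        counts = Counter((row[c1], row[c2]) for row in array)
        if len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L4))   # True
```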
% 3.2095 –10.0 11.8888 –17.9 db –10.6 7.9  p/100  * :(db) = 10 log   1  p/100  .6 0.6210 –17.1551 –10.4 4.8199 –8.9 4.1022 –11.2482 Omega Conversion Table* P.5790 –11.4 3.3762 –10.7359 –8.3063 –9.7 6.0700 –12.7 3.2 0.5 8.3691 –11.4046 –14.6 5.5483 –10.8073 –14.8 1.6 2.5 11.6691 –14.1 6.7981 –11.1542 –14.1292 –16.9738 –10.1864 –12.6 3.2 10.5 3.4 10.6892 –9.2 6.8 8.9 2.9740 –12.1734 –17.6 6.4 5.4183 –19.6 1.3527 –9.9 5.2 2.6856 –16.6 8.3 8.8912 –9.8 6.4744 –13.2157 –23.2 8.5 9.2777 –14.7240 –11.1 11.7 5.6970 –12.9430 –9.9620 –22.2603 –9.6510 –11.8 11.3 6.0 6.8486 –10.7264 –10.5 1.8 4.4381 –11.9 10.0 7.9952 –9.9 11.2 1.7 11.0 4.7 10.5 7.7390 –9.6070 –10.3 5.4 1.0339 –13.9919 –8.7 9.6 9.8021 –13.3 10.7875 –12.) 349 P.3 7.2 9.2 11.5 10.7871 –10.5 6.8734 –11.A P P E N D I X D db ¥ –29.1 2.1 10.8 2.4329 –10.9108 –10.3 1.9564 –19.5207 –12.2 5.0965 –14.2 7.0 10.7 8.0358 –8.4 2.2338 –11.7 2.4 11.0800 –9.7359 –15.6663 –10.9 1. % 6.0 0.1679 –13.1 7.4944 –9.8625 –8.3 3.1694 –9.1 5.5424 –9.4 0.9 db –15. 9 19.7342 –6.350 P.9929 –7.7 18.5 16.7 26.5730 –5.3 12.0736 –7.5878 –4.2558 –8.3 27.7 27.4399 –6.1853 –6.2 16.9 db –6.3 22.0299 P.3 21.8177 –4.2973 –6.1417 –8.2 17.0538 –5.4 13.5674 –7.7 21.3 17.0 25.6445 –6.7481 –4.2756 –4.0420 –7.6760 –5.3 20.1 21.8 20.9481 –6.2537 –4.5652 –4.4 12.5 13.5221 –5.2 15.6 27.9 23.5 17.5 19.0 17.8 21.0 24.6 17.9128 –5.9665 –5.5 22.7020 –4.7019 –5.1 16.4976 –4.2 14.0 13.7 12.0 14.8250 –6.2 18.1 23.4 27.8 19.8 13.7803 –5.6018 –7.5 21.3331 –8.2340 P.9 db –5.8595 –5.5475 –5.1373 –7.9563 –7. % 12.0 16.7 20.5 20.3985 –7.8 25.7 19.1501 –5.0 18.6 25.1 17.4687 –6.5 15.8066 –5.7250 –4.5 18.2 23.2722 –5.2 19.4993 –7.9 14.6791 –4.8644 –4.1259 –5.4 25.0 15.6 19.3322 –7.1 12.3 16.6501 –5.9 25.8 26.8330 –5.5561 –6.0 26.8 27.4 16.0106 –6.5 25.3 13.1 15.9 15.4655 –7.8556 –6.0 27.4 19.6 18.2476 –5.9 Appendix D db –8.9 27.1883 –4.0 23.7411 –7.9 21.4527 –4.1987 –5.1018 –5.3 19.2 20.5309 –8.8477 .8837 –7.1449 –4.8 23.3638 –4.6 13.1025 –6.7 23.8 17.7643 –6.3540 –6.2 13.0778 –5.5268 –6.2319 –4.2 26.9822 –4.1 25.8878 –4.3721 –8.2411 –6.6530 –8.3 14. % 20.1795 –8.2666 –7.9113 –4.6743 –6.9 17.7 22.7541 –5.6 26.7 16.3 24.0297 –7.4510 –8.9199 –7.3 26.6243 –5. % 24.4212 –5.9 13.7 17.1054 –7.2993 –7.2 22.2968 –5.4 23.5 23.2 24.1300 –6.6 22.4081 –4.3417 –4.6149 –6.1233 (Contd.4977 –6.8 22.4319 –7.9 db –7.0 22.6 15.6711 –7.5426 –4.1041 –8.) .8 18.5 27.3653 –7.6120 –8.2 25.7060 –7.3196 –4.8 24.0668 –8.9585 –4.7 25.0 19.0206 –5.7946 –6.6 23.0 21.1 24.1 13.5 12.7041 –6.6 24.6 14.5713 –8.4 17.3712 –5.4111 –6.6 21.7279 –5.9 22.2175 –8.4 20.1 22.1744 –5.4 18.1576 –6.5 14.4751 –4.4114 –8.1 14.2016 –7.1 19.9171 –6.1 27.6 20.5 24.1 20.5 26.3 23.7 14.4 26.2976 –4.4908 –8.4 15.5333 –7.9396 –5.3256 –6.2943 –8.6 16.0 12.1694 –7.8863 –6.3961 –5.2132 –6.7712 –4.2 27.8410 –4.7 15.6363 –7.4715 –5.6562 –4.7 24. % 16.4304 –4.4 22.8 15.8861 –5.3 15.0751 –6.7764 –7.3 18.7 13.0 20.3825 –6.9 26.2231 –5.8 16.2692 –6.6 12.8120 –7.2 12.4 24.4967 –5.6333 –4.3 25.–7.3463 –5.9935 –5.4 14.1 18.2101 –4.9349 –4.0060 –4.5200 –4.3859 –4.3215 –5.2 21.9793 –6.9 18.8 12.4463 –5.1666 –4.4 21.0478 P.8 14.1 26.6106 –4.5854 –6.5986 –5.7944 –4. 7212 –3.5933 –2.9 db –3.8 39. % 32.4 32.5 40.0168 –2.6 38.8227 –2.7 43.9 –1.7 39.4 30.2063 –1.2950 –1.0 42.2185 –2.9973 –2.2 28.9 31.6503 –2.2773 –1. 
% 28.0159 –3.6 32.4141 –3.3537 –3.0 43.5 28.4611 –2.2595 –1.6 37.8420 –2.0826 –1.2 35.4 29.0158 –1.3487 –2.1 34.4546 –3.8 38.8 34.0 38.6706 –1.5974 –3.1815 –2.1 36.7 37.1150 –3.7971 –1.3 36.3738 –3.2241 –1.4 40.9521 –3.6346 –1.9000 –2.1710 –1.1630 –2.9792 –1.0364 –3.9388 –2.4423 –2.4018 –1.4 42.7790 P.5176 P.5365 –2.0953 –3.2556 –2.5 42.4732 –1.Appendix D 351 db P.5 31.0 30.6 43.5 38.2 34.5986 –1.1 37.9 35.8 30.1 42.7067 –1.4048 –2.) .4911 –1.7 42.2536 –3.1347 –3.2 36.7 32.1261 –2.4 31.3 35.5448 –1.6591 –3.0756 –3.5627 –1.0 29.8 33.3 32.8613 –2.9 42.3939 –3.6798 –3.8046 –3.1017 –4.7 29.9609 –1.6123 –2.9583 –2.8 36.8 43.6886 –1.1 29.0588 –4.6 39.7 40.9 43.2 31.9062 –1.2 30.3 38.4196 –1.2928 –2.0341 –2.5 41.2139 –3.8675 –3.7 31.7628 –3.6 42.5 30.5 32.4952 –3.9 34.5 37.5807 –1.1 35.7837 –3.7842 –2.4 34.7076 –2.0 34.9 39.7650 –2.4343 –3.3136 –3.2742 –2.2 33.2936 P.6694 –2.4 33.2 41.9 30.5 29.2736 –3.9 41.6166 –1.1 31.1 32.7420 –3.3840 –1.8 40.8334 –1.1 41.6885 –2.1 28.7458 –2.2 32.0 33.1940 –3.8886 –3.2370 –2.3 43.9 38.5554 –2.8035 –2.7248 –1.1003 –1.9946 –3.1356 –1.6 28.2 42.3 30.8880 –1.0 28.9244 –1.4 37.8807 –2.8285 –3.9427 –1.3861 –2.7 33.6 34.2337 –3.0802 –4.5090 –1.5156 –3.3 28.2000 –2.0 35.9309 –3.3 37.7267 –2.5 33.9 db –4.0 31.0525 –2.7 28.8698 –1.6 36.2418 –1.1179 –1.4749 –3.6 33.5359 –3.6385 –3.6313 –2.3 41.2 38.1 38.5 39.4 39.0 37.6526 –1.7005 –3.8465 –3.1544 –3.9 29.1533 –1. % 40.6 35.3 33.8 41.7609 –1.9097 –3.0373 –4.3 39.0 36.3 40.1 43.8 29.0560 –3.8 37.4 43.7 41.2 29.3336 –3.4553 –1.7 38.0709 –2.8153 –1.2 37.8 35.5564 –3.1077 –2.2 40.2 43.9194 –2.6 29.6 40.3300 –2.1 40.4 41.3 31.8 31.3662 –1.7 30.3674 –2.9778 –2.4236 –2.4 35.6 30.5 36.1445 –2.9733 –3.3114 –2.8 28.3 42.4375 –1.7428 –1.1742 –3.0650 (Contd.6 41.1 33.9 37.3306 –1.9 db –2.4 28.0 39.3484 –1.1 30.5 35.3128 –1. % 36.8 32.8 42.4 36.6179 –3.5269 –1.5 34.1 39.3 34.8516 –1.5768 –3.2 39.9975 –1.6 31.7 34.0 41.0 32.7 35.0 40.0893 –2.7 36.4 38.5 43.9 33.5744 –2.4988 –2.1886 –1.3 29.4799 –2. 2433 –0.5392 0.2 47.4695 –0.2 46.9 47.4732 1.2418 1.9 Appendix D db –1.2 57.2 58.8 51.1 44.3662 1.3 46. % 48.8 58.2085 0.9 51.1 47.8014 –0.5 57.0474 1.9 55.3 47.3306 1.0 51.8 49.1 49.3 48.2 49.5 47.3 54.3650 0.8 57.9242 –0.3 55.0 55.0347 –0.9066 0.5 45.4521 –0.0 58.5 48.0650 1.6 45.9417 –0.5 51.3476 0.6614 –0.5 56. % 52.7 45.4347 –0.5916 0.8 44.5567 –0.1 54.8 48.7 47.5 50.2241 1.0000 0.7313 –0.6886 1.5916 –0.9 50.0 44.4 44.0521 –0.6614 0.7428 (Contd.8189 –0.352 P.5392 –0.6964 –0.1 55.1 53.7 55.4 57.2 44.6 49.8 52.5741 –0.6 50.0826 1.1179 1.7313 0.5 49.6 47.0 53.1564 –0.1 45.6526 1.6265 –0.4 53.4172 –0.4 51.5043 0.7 50.7138 0.3998 –0.2085 –0.1911 –0.0297 –1.6 48.3650 P.1356 1.2595 1.2780 –0.8 55.1 57.4 54.2 45.8891 –0.0174 0.3 52.8364 –0.3 58.3484 1.5 54.2954 –0.7138 –0.5 59.0 56.1 51.6439 –0.8 56. % 44.9593 0.0121 1.6 55.7839 0.1 56.6 58.2063 1.8014 0.6 54.4 52.5 55.8189 0.4347 0.9242 0.7 44.1390 –0.7 59.1042 0.0174 0.4553 1.6964 0.2950 1.6166 1.2 48.5741 0.1003 1.6 57.8 47.9 db 0.1533 1.) 
.5627 1.7488 –0.1 58.2 59.5 46.4695 0.7488 0.7 51.5 58.5269 1.3998 0.8540 0.8540 –0.3 45.5448 1.8 50.1 50.0 46.7 56.3 49.9593 –0.0 48.0 57.2 51.3 50.5218 0.4 45.1 52.4869 0.2259 –0.0474 –1.3 56.6 56.5 53.4172 0.3128 1.3476 –0.7663 0.5 44.3302 P.8715 0.8 53.8 59.6265 0.2 52.6 52.3824 0.0 47.7 54.0 54.0 50.3302 –0.4911 1.9945 –0.2433 0.9 53.2773 1.7 52.8 54.6 53.4 56.4375 1.6 46.9066 –0.2 56.9417 0.1 48.5218 –0.7 53.6789 –0.1911 0.0 52.4869 –0.0347 0.1216 0.6090 0.7 48.9 59.9 db 1.6439 0.9945 1.7 49.0521 0.7 57.2 53.0 59.2607 0.1042 –0.1564 0.1328 0.8 45.1886 1.7248 1.6346 1.6789 0.5043 –0.9 45.2 55.9 54.3824 –0.9769 0.2607 –0.4 55.8715 –0.0869 0.9 58.3128 –0.1737 –0.0121 –0.4 50.6706 1.8891 0.2780 0.4 59.7 58.4 46.0 45.7663 –0.7839 –0.3 53.8364 0.3 51.9 46.3840 1.3 59.9769 –0.1 59.1390 0.4018 1.7 46.6 44.4 58.5 52. % 56.4521 0.1216 –0.0297 P.9 57.6090 –0.3 57.0695 –0.1710 1.7067 1.0 49.4 48.6 51.4 47.4 49.0695 0.2 54.1 46.1737 0.8 46.3 44.0869 –0.5090 1.5807 1.2259 0.4196 1.9 49.5986 1.2 50.5567 0.6 59.2954 0.9 db –0. 1544 3.8177 4.7 70.3 60.6385 3.6 68.4 71.6313 2.5 73.0953 3.8 71.1 68.3 75.5974 3.6 65.8807 2.9 74.8 72.9946 4.1 64. % 72.2000 2.7 63.5652 4.0 60.8880 1.1 63.3300 2.5 70.6106 4.0560 3.8035 2.2370 2.9 71.9 61.7842 2.8886 3.9388 2.9822 (Contd.2736 3.3 71.8046 3.2 64.7 73.1 74.0893 2.7 75.4527 4.3939 3.3 65.5768 3.9609 1.5 60.8 68.9 69.8 74.3417 4.5 67.3 61.4423 2.7 60.9 66.4799 P.9 db 3.8420 2.2 61.5 64.3 63.9244 1.8 69.1347 3.8334 1.4 65.2976 4.8153 1. % 64.7 69.6123 2.5 71.2 60.0 65.9 70.3114 2.6885 2.3674 2.1 71.6 62.6 60.8227 2.3 70.0 66.6694 2.9 65.0 73.9 db 1.0 62.4 68.2536 P.8410 4.7076 2.9194 2.2 72.7944 4.7 62.4 69.9097 3.1449 4.0168 3.8 64.2936 3.4 62.8 73.6 66.4236 2.3196 4.2537 4.2319 4.9 67.4 64.3 62.6 74.1 75.3859 4.0 61.1 70.0 70.7 61.4 60.6591 3.7837 3.0588 4.7005 3.5 65.6503 2.9583 2.2 68.8698 1.1 61.7609 1.0159 4.0 68.6791 4.1666 4.6 64.0802 P.9973 3.9521 3.0 63.7420 3.1 69.5 74.9 73.8 75.8516 1.7790 1.9 75.8 67.2 74.0 71.7 67.8 60.4 73.6 67.9309 3.0709 2.3 66.7 64.5 69.1742 3.4976 4.5 62.3638 4.3537 3.7 66.5564 3.1 62.9 .2928 2.4 70.1 60. % 60.7628 3.5426 4.1233 4.7 72.8613 2.5156 3.8878 4.3861 2.9113 4.7 71.4952 3.8 66.4 66.4 74.9 62.4304 4.5554 2.4 67.7 74.8644 4.7458 2.0 67.0364 3.3 68.1261 2.1940 3.0525 2.5359 3.2742 2.6 61.2756 4.1017 4.2 66.3 72.7712 4.9733 3.2 71.1150 3.7250 4.8 61.3738 3.9427 1.3 64.6 75.8 65.6 63.9975 2.6 70.3 69.1445 2.9 db 2.4546 3.1815 2.7267 2.3136 3.2 75.3 73.4343 3.5200 4.9349 4.5933 2.2 63. % 68.3 74.2 73.4 61.0 72.9778 2.2 65.4 72.1630 2.0373 4.0 75.2337 3.4988 2.2139 3.4749 3.0 69.5 66.8465 3.0341 2.0 64.5176 2.1 72.5 61.4751 4.2 70.7481 4.7020 4.2 69.8255 3.8675 3.5878 4.9 63.7212 3.2556 2.6 71.7 65.7971 1.2 67.6179 3.9062 1.6 73.6798 3.7650 2.8 63.5 75.4 75.5 72.2185 2.3487 2.0756 3.2 62.0 74.4048 2.9792 1.Appendix D 353 db 4.1 66.4 63.4141 3.5744 2.1 67.4611 2.9585 4.0158 2.6 69.) P.5 68.9000 2.3336 3.6333 4.6562 4.5365 2.1883 4.6 72.5 63.3 67.2101 4.7 68.1 73.1077 2.8 62.4081 4.8 70.1 65. 8556 6.8837 7.9929 8.1245 9.6 90.2558 8.4 77.6 82.1 77.4510 8.7 85.7777 8.5 89.1 80.2 90.7 86.2993 7.9053 8.6 81.3 85.9396 5.5 77.0538 5.3256 6.9 Appendix D db 5.7019 5.4 76.1987 5.2 76.0060 5.2645 10.4977 6.9 db 6.7041 6.7060 7.6 84.5 78.1694 P.0 91.5 80.9 90.8 89.0480 10.3 91.8 84.8861 5.3 83.2 83.4903 10.2476 5.8477 7.7541 5.7342 6.2095 10. % 80.3961 5.7892 9.0420 7.8 88.5561 6.6243 5.7 78.3 80.4687 6.9 81.4463 5.5909 9.4908 8.354 P.0 87.5333 7.2943 8.0 88.1 76.7 84.2 82.5 88.2 89.) 
.4 84.8 76.8 77.6 91.0 83.3527 9.4 88.4 82.5 82.5 85.7 77.1417 8.2 84.1501 5.1373 7.2340 7.8 85.3 86.8120 7.6445 6.7764 7.9665 5.4329 10.7411 7.5713 8.7803 5.7359 8.3322 7.9919 9.0299 5.4 85.7 88.5 81.3 87.8 87.3 84.6363 7.0800 9.9 91.1 84.0668 8.6018 7.9481 6.3 90.4 83.2 86.4 89.3996 9.0 84.6943 8.7 76.0 78.5 87.3540 6.7643 6.4944 9.8 82.6 83.8 83.1 81.1 91.8 79.1576 6.8400 9.2603 9.3063 9.2 77.8250 6.7 87.7 80.9485 8.6 86.0736 7.7 82.3 81.9 87.4 90.5674 7.1744 5.2411 6.4 80.8 90.1 89.5 83.9793 7.6120 P.9171 6.5483 (Contd.4212 5.3825 6.4468 9.2 85.2 79.2666 7.7279 5.5424 9.4 87.5854 6.1 87.9563 7.5 84.6 88.1054 7.6530 8.4 78.3 78.0 82.3 76.3200 10.2692 6.1 90.6711 7.3712 5.8 86.4967 5.5 86.2 78.8330 5.5309 8.8912 9.2 88.5475 5.8066 5.9430 9.0 89.8 91.6892 9.6743 6.9 86.7 89.0 86.4993 7.7 81.9 77.6149 6.0 80.9 db 8.9 82.4 91. % 76.3653 7.9 85.2147 9.0 85.3215 5.0 79.7 79.6 89.2175 8.8 78.0106 7.3 77.9 79.5221 5.5 90.7 91.8 80.0751 6.1795 8.1041 8.5 79.9935 P.1 88.9 78.6760 5.8199 8.4111 6.2968 5.4715 5.5 76.0297 8.0778 5.6 87.5986 5.4399 6.4655 7.1694 9.6398 9.3 89.6 77.1853 6.3463 5.7 83.6 85.4 86.2016 7.0 76.9952 10.8 81. % 88.1 78.9 89.1300 6.0 77.1018 5.5730 5.3 82.6 76.6 80.6 78.2 91.2132 6.6 79.2 81.0 90.7 90.2973 6.1 82.5 91.3721 8.3 88.0206 6.7390 9.9 db 7.0 81.3985 7.3 79.8863 6.2 80.2231 5.1 83.4114 8.0478 6.2 87.1 86.8625 8.0358 9.4 79.1 85.9128 5.1 79.9199 7.8595 5.9 83.5268 6.6501 5.4 81.1013 10.2722 5. % 84.1259 5.4319 7.1025 6.3762 10.1551 10.3331 8.7946 6. 8037 19.7981 11.7359 15.6 93.9108 10.6070 10.1 97.7875 12.2 94.4 98. % 94.7 93.1 98.6856 P.0 97.2678 12.4 96.6970 12.4046 14.4381 11.0 94.5812 13.1558 19.7 99.5 98.5350 14.3684 17.7264 10.0700 13.6903 P. % 98.8486 10.2777 14.8073 14.0274 12.2 99.9810 29.9 93.6080 12.4 93.6 99.2679 13.4775 18.0376 11.7240 11.5 92.5 96.6663 10.6 98.7 94.1676 11.2482 15.1864 12.8 94.4 97.1062 12.9 db 11.1292 17.Appendix D 355 db P.3507 12.4 94.1 96.9 95.3 93.9 db 13.6510 11.5790 11.9885 23.6 96.9740 13.0 93.8 95.7 98.3 92.6 94.1 92.7 95.2 96.0339 14.2 93.8 92.2817 16.0965 15.1 95.9620 25.1 99.1924 22.1 93.5 94.8798 12.0 99.6 97.3 97.9564 20.0 16.3 98.4350 12.8 93.9 97.8888 18.3010 11.9020 17.1679 13.9496 15.5 99.4 99.0 96.3701 13.2 92.1542 14.4183 20.0924 16.5207 12.8 96.6691 14.5 97.7 96.9 100.1 94.5185 22.9 99.8 97.1734 18.6 95. % 92.3 99.0 98.5380 19.9 db 10.9738 11.4 95.3 94.7 92.3691 11.5080 11.2157 26.5 95.9342 21.2 97.5 93.8 98. % 96.9106 16.4744 13.7871 10.5675 15.4792 16.8021 13.0 95.2338 11.4051 15.3 95.9957 ¥ .7 97.6210 17.8734 P.8 99.3 96.2 98.0 92.9498 12.2 95.6 92.4 92.1022 11.9166 14. UK. and Michael Hamada.W. Quality by Design: Taguchi Techniques for Industrial Experimentation. 2002. 357 . 2002. 2004.. USA. Empirical Model Building and Response Surfaces..E. Box. India.P. K. Simultaneous Optimization of Multi-response Problems in the Taguchi Method Using Genetic Algorithm. W. UK.. F. Optimizing Multi-response Problem in the Taguchi Method by DEA-based Ranking Method. John Wiley & Sons.. Vol.M. AddisonWesley. R. Box. 2006. Vol. J. J. G.R. USA. Unpublished Ph. 32. and Behnken. of Advanced Manufacturing Technology. Agricultural and Medical Research. 2005.E. and Lin. 825–837. Vol.. Small Response Surface Designs. Shahabudeen. USA. MA. India. R. Oliver and Boyd. John Wiley & Sons. 26. Experimental Designs. 187–194. Multi-response Robust Design by Principal Component Analysis. Total Quality Management. Int. Anna University. Int. 2. Edinburg.E.References Belavendram. Hung-Chang and Yan-Kwang. Experiments: Planning.. D. Jeyapaul. 
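Since the table is generated entirely by the footnote formula, the conversion and its inverse can also be computed directly. A minimal sketch, not from the book:

```python
# Omega (decibel) transformation for percentage data, as tabulated in Appendix D.
import math

def omega_db(p_percent):
    """Omega(db) = 10*log10[(p/100) / (1 - p/100)], valid for 0 < p < 100."""
    p = p_percent / 100.0
    return 10.0 * math.log10(p / (1.0 - p))

def omega_inverse(db):
    """Back-transform a db value to a percentage."""
    return 100.0 / (1.0 + 10.0 ** (-db / 10.0))

if __name__ == "__main__":
    print(round(omega_db(10.0), 4))          # about -9.5424, as in the table
    print(round(omega_inverse(-9.5424), 1))  # 10.0
```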
APPLIED DESIGN OF EXPERIMENTS AND TAGUCHI METHODS
K. Krishnaiah • P. Shahabudeen

Design of experiments (DOE) is an off-line quality assurance technique used to achieve best performance of products and processes. This book covers the basic ideas, terminology, and the application of techniques necessary to conduct a study using DOE. The text is divided into two parts—Part I (Design of Experiments) and Part II (Taguchi Methods). Part I (Chapters 1–8) begins with a discussion on the basics of statistics and fundamentals of experimental designs, and then it moves on to describe randomized design, Latin square design and Graeco-Latin square design. In addition, it also deals with a statistical model for two-factor and three-factor experiments and analyses 2k factorial and 2k–m fractional factorial designs and the methodology of surface design. Part II (Chapters 9–16) discusses Taguchi quality loss function, orthogonal design, and objective functions in robust design. Besides, the book explains the application of orthogonal arrays, data analysis using response graph method/analysis of variance, methods for multi-level factor designs, factor analysis and genetic algorithm.

This book is intended as a text for the undergraduate students of Industrial Engineering and postgraduate students of Mechatronics Engineering, Mechanical Engineering, and Statistics. In addition, the book would also be extremely useful for both academicians and practitioners.

KEY FEATURES
• Includes six case studies of DOE in the context of different industry sectors.
• Provides essential DOE techniques for process improvement.
• Introduces simple graphical methods for reducing time taken to design and develop products.

THE AUTHORS
K. KRISHNAIAH, PhD, is former Professor and Head, Department of Industrial Engineering, Anna University, Chennai. He received his BE (Mechanical) and ME (Industrial) from S.V. University. Dr. Krishnaiah has twenty-eight years of teaching experience and has published several papers in national and international journals. His areas of interest include operations and production, design and analysis of experiments, and statistical quality control.

P. SHAHABUDEEN, PhD, is Professor and Head, Department of Industrial Engineering, Anna University, Chennai. He received his BE (Mechanical) from Madurai University and ME (Industrial) from Anna University, Chennai. Dr. Shahabudeen has published/presented more than thirty papers in international journals/conferences. His areas of interest include discrete system simulation, meta heuristics, object-oriented programming and operations research.

You may also be interested in
Total Quality Management: Text and Cases by B. Janakiraman and R.K. Gopal
Total Quality Management by Mukherjee
Implementing ISO 9001:2000 Quality Management System: A Reference Guide by Divya Singhal and K.R. Singhal

ISBN 978-81-203-4527-0    ₹ 395.00    www.phindia.com