GraphPad Prism Slides

May 12, 2018 | Author: VasincuAlexandru | Category: Statistical Power, Standard Deviation, Analysis Of Variance, Errors And Residuals, Effect Size






Introduction to Statistics with GraphPad Prism 7

Outline of the course
• Power analysis with G*Power
• Basic structure of a GraphPad Prism project
• Analysis of qualitative data: Chi-square test
• Analysis of quantitative data: t-test, ANOVA, correlation and curve fitting

Power analysis
• Definition of power: the probability that a statistical test will reject a false null hypothesis (H0) when the alternative hypothesis (H1) is true.
• Translation: statistical power is the likelihood that a test will detect an effect when there is an effect to be detected.
• Main output of a power analysis: estimation of an appropriate sample size.
– Too big: waste of resources.
– Too small: you may miss the effect (p>0.05) and waste resources.
• Grants: justification of sample size.
• Publications: reviewers ask for evidence of a power calculation.
• Home Office: the 3 Rs: Replacement, Reduction and Refinement.

[Workflow diagram: hypothesis, experimental design, choice of a statistical test, power analysis (sample size), experiment(s), data exploration, statistical analysis of the results.]

Experimental design: think stats!
• Translate the hypothesis into statistical questions: what type of data? what statistical test? what sample size?
• Very important: the difference between technical and biological replicates. [Diagram: technical vs biological replicates, n=1 vs n=3.]

Power analysis
The power analysis depends on the relationship between 6 variables:
• the difference of biological interest,
• the standard deviation (difference + SD = the effect size),
• the significance level,
• the desired power of the experiment,
• the sample size,
• the alternative hypothesis (i.e. one or two-sided test).

1 The difference of biological interest
• This is to be determined scientifically, not statistically.
• The minimum meaningful effect of biological relevance.
• The larger the effect size, the smaller the experiment will need to be to detect it.
• How to determine it? Substantive knowledge, previous research, pilot study…
• In a 'power context', the effect size combines the difference and the variability, e.g. Cohen's d = (Mean 1 – Mean 2)/Pooled SD.

2 The Standard Deviation (SD)
• Variability of the data.
• How to determine it?
• Substantive knowledge, previous research, pilot study…

3 The significance level
• Usually 5% (p<0.05).
• The p-value is the probability that a difference as big as the one observed could be found even if there is no effect.
• Don't throw away a p-value=0.051!

4 The desired power of the experiment: 80%

5 The sample size: that's (often) the whole point!

6 The alternative hypothesis
• One or two-sided test?

• Fix any five of the variables and a mathematical relationship can be used to estimate the sixth, e.g.: what sample size do I need to have an 80% probability (power) of detecting this particular effect (difference and standard deviation) at a 5% significance level using a 2-sided test?
• Good news: there are packages that can do the power analysis for you, providing you have some prior knowledge of the key parameters (difference + standard deviation = effect size)!
– Free packages: G*Power and InVivoStat; Russ Lenth's power and sample-size page: http://www.divms.uiowa.edu/~rlenth/Power/
– Cheap package: StatMate (~£30)
– Not so cheap package: MedCalc (~£275)

Qualitative data
• Not numerical: the values taken are usually names (also called nominal), e.g. causes of death in hospital.
• Values can be numbers and still not be numerical, e.g. a group number is a numerical label, not a unit of measurement.
• A qualitative variable with an intrinsic order in its categories is ordinal.
• Particular case: a qualitative variable with 2 categories is binary or dichotomous, e.g. alive/dead or male/female.

Analysis of qualitative data
Example of data (cats and dogs.xlsx):
• Cats and dogs trained to line dance, with 2 different rewards: food or affection.
• Is there a difference between the rewards? Is there a significant relationship between the 2 variables?
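The six-variable relationship above can be sketched numerically. Below is a minimal normal-approximation sketch for a two-sample, two-sided design; the numbers are borrowed from the coyote example used later in the course (males 92 cm, SD 7 cm, an expected 5% difference). G*Power itself uses the noncentral t distribution, which is why it reports a slightly larger answer for the same inputs (38 per group, 76 in total).

```python
import math
from scipy.stats import norm

# Inputs: difference of biological interest, SD, significance level, power
mean_male, sd = 92.0, 7.0            # literature values (coyote example)
difference = 0.05 * mean_male        # expected 5% difference = 4.6 cm
d = difference / sd                  # effect size (Cohen's d) ~ 0.66

z_alpha = norm.ppf(1 - 0.05 / 2)     # two-sided 5% significance level
z_power = norm.ppf(0.80)             # desired power of 80%

# Normal-approximation sample size per group for a two-sample t-test
n_per_group = math.ceil(2 * ((z_alpha + z_power) / d) ** 2)
print(n_per_group)  # 37 (G*Power's noncentral-t calculation gives 38)
```

Fixing any five of the six quantities and solving for the sixth is exactly what G*Power does behind its dialogs.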
– Are the animals rewarded by food more likely to line dance than the ones rewarded by affection?
• To answer this question:
– Build a contingency table (Dance / No dance against Food / Affection).
– Run a Fisher's exact test.
• But first: how large do your samples need to be?

G*Power: a priori power analysis
• Example case: preliminary results from a pilot study on cats: 25% line-danced after having received affection as a reward vs. 70% after having received food.
• Power analysis with G*Power = 4 steps:
– Step 1: choice of test family.
– Step 2: choice of statistical test (Fisher's exact test or Chi-square for 2x2 tables).
– Step 3: type of power analysis.
– Step 4: choice of parameters. Tricky bit: you need information on the size of the difference and the variability.
• G*Power output: if the values from the pilot study are good predictors and if you use a sample of n=23 for each group, you will achieve a power of 81%.

The null hypothesis and the error types
• The null hypothesis (H0): H0 = no effect, e.g. the animals rewarded by food are as likely to line dance as the ones rewarded by affection.
• The aim of a statistical test is to reject or not reject H0.
• Statistical decision vs the true state of H0:
– Reject H0 when H0 is true: type I error (α), a false positive.
– Reject H0 when H0 is false: correct, a true positive.
– Do not reject H0 when H0 is true: correct, a true negative.
– Do not reject H0 when H0 is false: type II error (β), a false negative.
• Traditionally, a test or a difference is said to be "significant" if the probability of a type I error is α ≤ 0.05.
• High specificity = few false positives = low type I error.
• High sensitivity = few false negatives = low type II error.

Chi-square test
• In a chi-square test, the observed frequencies for two or more groups are compared with the frequencies expected by chance (observed frequency = collected data).
• Example with the cats and dogs.xlsx data.
Chi-square test (2)
Example: the expected frequency of cats line dancing after having received food as a reward.
• Direct counts approach: expected frequency = (row total × column total) / grand total = 32 × 32 / 68 = 15.1
• Probability approach: probability of line dancing = 32/68; probability of receiving food = 32/68; expected frequency = (32/68) × (32/68) = 0.22; 22% of 68 = 15.1

Cats (observed counts, with expected counts in brackets):
• Danced: food 26 (15.1), affection 6 (16.9); total 32
• No dance: food 6 (16.9), affection 30 (19.1); total 36
• Totals: food 32, affection 36; grand total 68

Dogs (observed counts, with expected counts in brackets):
• Danced: food 23 (22.8), affection 24 (24.2); total 47
• No dance: food 9 (9.2), affection 10 (9.8); total 19
• Totals: food 32, affection 34; grand total 66

For the cats: Chi² = (26−15.1)²/15.1 + (6−16.9)²/16.9 + (6−16.9)²/16.9 + (30−19.1)²/19.1 = 28.4. Is 28.4 big enough for the test to be significant?

Chi-square test: results
[Figure: counts of dancers and non-dancers by reward type, for dogs and cats.]
• In our example: cats are more likely to line dance if they are given food as a reward than affection (p<0.0001), whereas dogs don't mind (p>0.99).
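The contingency-table arithmetic above can be reproduced with scipy; the counts are the ones from the slides, and the expected frequencies and test results come out the same.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows = danced / did not dance, columns = food / affection (counts from the slides)
cats = [[26, 6], [6, 30]]
dogs = [[23, 24], [9, 10]]

# Expected count = (row total * column total) / grand total, e.g. 32*32/68 ~ 15.1
chi2, p_chi2, dof, expected = chi2_contingency(cats, correction=False)
print(round(chi2, 1), round(expected[0][0], 1))  # 28.4 15.1

# Fisher's exact test, as used on the slides: tiny p for cats, p near 1 for dogs
_, p_cats = fisher_exact(cats)
_, p_dogs = fisher_exact(dogs)
print(p_cats, p_dogs)
```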
Quantitative data
• They take numerical values (units of measurement).
• Discrete: obtained by counting (e.g. the number of students in a class); values vary by finite, specific steps.
• Continuous: obtained by measuring (e.g. the height of students in a class); any value is possible.
• They can be described by a series of parameters: mean, variance, standard deviation, standard error and confidence interval.

The mean
• Definition: the average of all values in a column.
• It can be considered a model because it summarises the data.
– Example: a group of 5 people; number of friends of each member: 1, 2, 3, 3 and 4.
– Mean: (1+2+3+3+4)/5 = 2.6 friends per person: clearly a hypothetical value.
• How can we know that it is an accurate model? Look at the difference between the real data and the model created.

The mean (2)
• Calculate the magnitude of the differences between each data point and the mean (from Field, 2000):
– Total error = sum of differences = 0. No errors?!
– Positive and negative errors cancel each other out.

Sum of Squared errors (SS)
• To avoid the problem of the direction of the error, we square the errors: instead of the sum of errors, the sum of squared errors (SS).
• SS gives a good measure of the accuracy of the model.
• But it depends on the amount of data: the more data, the higher the SS.
• Solution: divide the SS by the number of observations (N). As we are interested in measuring the error in the sample to estimate the one in the population, we divide the SS by N−1 instead of N, and we get the variance: S² = SS/(N−1).

Variance and standard deviation
• Problem with the variance: it is measured in squared units.
• For convenience, the square root of the variance is taken to obtain a measure in the same units as the original measure: the standard deviation, S.D. = √(SS/(N−1)) = √(s²) = s.
• The standard deviation is a measure of how well the mean represents the data.
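The friends example can be followed end to end in a few lines, through to the standard error and 95% confidence interval defined on the next slides; only the five data points come from the course.

```python
import math

friends = [1, 2, 3, 3, 4]               # number of friends of each of the 5 people
n = len(friends)

mean = sum(friends) / n                 # (1+2+3+3+4)/5 = 2.6
errors = [x - mean for x in friends]
total_error = sum(errors)               # positive and negative errors cancel: ~0

ss = sum(e ** 2 for e in errors)        # sum of squared errors
variance = ss / (n - 1)                 # divide by N-1 to estimate the population
sd = math.sqrt(variance)                # standard deviation, in the original units

sem = sd / math.sqrt(n)                 # standard error of the mean
ci_low = mean - 1.96 * sem              # limits of the 95% confidence interval
ci_high = mean + 1.96 * sem

print(mean, ss, variance, round(sd, 3))     # 2.6 5.2 1.3 1.14
print(round(sem, 3), round(ci_low, 2), round(ci_high, 2))
```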
Standard deviation
• Small S.D.: data close to the mean: the mean is a good fit, an accurate representation of the data.
• Large S.D.: data distant from the mean: the mean is not an accurate representation of the data.

SD and SEM (SEM = SD/√N)
What are they about?
• The SD quantifies how much the values vary from one another: scatter or spread. The SD does not change predictably as you acquire more data.
• The SEM quantifies how accurately you know the true mean of the population. Why? Because it takes into account both the SD and the sample size.
• The SEM gets smaller as your sample gets larger. Why? Because the mean of a large sample is likely to be closer to the true mean than the mean of a small sample.

SD and SEM
• The SD quantifies the scatter of the data.
• The SEM quantifies how much we expect sample means to vary.

SD or SEM?
• If the scatter is caused by biological variability, it is important to show the variation: report the SD rather than the SEM.
• If you are using an in vitro system with no biological variability, the scatter is about experimental imprecision (no biological meaning): report the SEM to show how well you have determined the mean.
• Better still: show a graph of all data points.

Confidence interval
• A range of values that we can be 95% confident contains the true mean of the population.
• So the limits of the 95% CI are [Mean − 1.96 SEM; Mean + 1.96 SEM] (SEM = SD/√N).

Error bars
• Standard deviation (descriptive): the typical or average difference between the data points and their mean.
• Standard error (inferential): a measure of how variable the mean will be if you repeat the whole study many times.
• Confidence interval, usually 95% CI (inferential): a range of values you can be 95% confident contains the true mean.

Analysis of quantitative data
• Choose the correct statistical test to answer your question. There are 2 types of statistical tests:
– Parametric tests, with 4 assumptions to be met by the data.
– Non-parametric tests, with no or few assumptions (e.g. Mann-Whitney test) and/or for qualitative data (e.g. Fisher's exact and χ² tests).
Assumptions of parametric data
All parametric tests have 4 basic assumptions that must be met for the test to be accurate.
1) Normally distributed data
• Normal shape: bell shape, Gaussian shape.
• Transformations can be made to make data suitable for parametric analysis.

Assumptions of parametric data (2)
• Frequent departures from normality:
– Skewness: lack of symmetry of a distribution (skewness < 0, skewness = 0, skewness > 0).
– Kurtosis: a measure of the degree of 'peakedness' of a distribution. Two distributions can have the same variance and approximately the same skew but differ markedly in kurtosis: a more peaked distribution has kurtosis > 0, a flatter distribution has kurtosis < 0.

Assumptions of parametric data (3)
2) Homogeneity in variance: the variance should not change systematically throughout the data.
3) Interval data: the distance between points of the scale should be equal at all parts along the scale.
4) Independence: data from different subjects are independent; the values from one subject do not influence the values from another. Important in repeated-measures experiments.

Analysis of quantitative data
• Is there a difference between my groups regarding the variable I am measuring? e.g. are the mice in group A heavier than the ones in group B?
– Tests with 2 groups: parametric: t-test; non-parametric: Mann-Whitney/Wilcoxon rank sum test.
– Tests with more than 2 groups: parametric: analysis of variance (one-way ANOVA); non-parametric: Kruskal-Wallis.
• Is there a relationship between my 2 (continuous) variables? e.g. is there a relationship between the daily intake in calories and an increase in body weight?
– Test: correlation (parametric or non-parametric).

Remember: stats are all about understanding and controlling variation. In a statistical test, the ratio of signal to noise determines the significance.
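In Prism the normality checks above are usually done by eye (histograms) or with its built-in normality tests; the same idea can be sketched numerically with scipy on made-up simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal_data = rng.normal(loc=10, scale=2, size=500)   # roughly Gaussian sample
skewed_data = rng.exponential(scale=2, size=500)      # right-skewed sample

# Skewness ~ 0 and Fisher kurtosis ~ 0 for a normal sample
print(stats.skew(normal_data), stats.kurtosis(normal_data))
print(stats.skew(skewed_data))        # clearly positive: lack of symmetry

# Shapiro-Wilk test: a small p-value is evidence against normality
_, p_norm = stats.shapiro(normal_data)
_, p_skew = stats.shapiro(skewed_data)
print(p_norm, p_skew)
```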
• If the noise is low, the signal is detectable: statistical significance.
• But if the noise (i.e. interindividual variation) is large, the same signal will not be detected: no statistical significance.

Comparison between 2 groups: t-test
• Basic idea: when we are looking at the differences between scores for 2 groups, we have to judge the difference between their means relative to the spread or variability of their scores.
• Example: comparison of 2 groups, control and treatment.

t-test (2) / t-test (3)
[Figures: two-group plots showing how the gap or overlap between error bars relates to the p-value.]
• SE bars, n=3: a gap of ~2 × SE corresponds to p≈0.05; a gap of ~4.5 × SE to p≈0.01.
• SE bars, n≥10: a gap of ~1 × SE corresponds to p≈0.05; a gap of ~2 × SE to p≈0.01.
• 95% CI bars, n=3: an overlap of ~1 × CI corresponds to p≈0.05; an overlap of ~0.5 × CI to p≈0.01.
• 95% CI bars, n≥10: an overlap of ~0.5 × CI corresponds to p≈0.05; bars just touching (~0 overlap) to p≈0.01.

t-test (4)
3 types:
• Independent t-test: compares the means of two independent groups of cases.
• Paired t-test: looks at the difference between two variables for a single group; the second sample is the same as the first after some treatment has been applied.
• One-sample t-test: tests whether the mean of a single variable differs from a specified constant (often 0).

Example: coyote.xlsx
• Question: is there a difference in size between male and female coyotes?
• 1 Power analysis
• 2 Plot the data
• 3 Check the assumptions for a parametric test
• 4 Statistical analysis: independent t-test
• G*Power a priori power analysis: you need a sample size of n=76 (2 × 38).
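The three flavours of t-test listed above, sketched with scipy on invented numbers (none of these are the course datasets):

```python
from scipy import stats

# Toy data, purely illustrative
group_a = [92, 90, 95, 88, 91, 93]    # e.g. lengths in one group
group_b = [86, 89, 85, 90, 87, 84]    # an independent second group
before  = [12, 15, 11, 14, 13]        # same subjects measured twice
after   = [14, 17, 12, 16, 15]

# Independent t-test: two unrelated groups
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Paired t-test: the difference within each subject
t_rel, p_rel = stats.ttest_rel(before, after)

# One-sample t-test: one mean against a fixed constant
t_one, p_one = stats.ttest_1samp(group_a, popmean=90)

print(p_ind, p_rel, p_one)
```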
G*Power independent t-test: a priori power analysis
• Example case: you don't have data from a pilot study, but you have found some information in the literature: in a study run in conditions similar to the one you intend to run, male coyotes were found to measure 92 cm ± 7 cm (SD).
• You expect a 5% difference between genders, with a similar variability in the female sample.

Example: coyote.xlsx
[Figure: box plot of coyote length (cm) by sex, annotated with the box-plot anatomy: maximum; upper quartile (Q3, 75th percentile); median; lower quartile (Q1, 25th percentile); interquartile range (IQR); smallest data value above the lower cutoff (Q1 − 1.5 × IQR); outliers below it.]

Assumptions for parametric tests
[Figure: frequency-distribution histograms of length by sex, drawn with several bin widths.]
• Normality ✓
• OK here, but with several groups of different sizes, go for percentages.

Independent t-test: example (coyote.xlsx)
[Figures: the same data summarised with standard-deviation, standard-error and confidence-interval error bars.]

Independent t-test: results
• Males tend to be longer than females, but not significantly so (p=0.1045).
• Homogeneity in variance ✓
• What about the power of the analysis?

Power analysis
• You would need a sample 3 times bigger to reach the accepted power of 80%.
• But is a 2.3 cm difference between genders biologically relevant (<3%)?
• Another example of t-test: working memory.xlsx.

The sample size: the bigger the better?
• It takes huge samples to detect tiny differences, but tiny samples to detect huge differences.
• What if the tiny difference is meaningless? Beware of overpower.
• There is nothing wrong with the stats: it is all about the interpretation of the results of the test.
• Remember the important first step of power analysis: what is the effect size of biological interest?

Paired t-test: working memory.xlsx
• Normality ✓
• Results: no test for homogeneity of variances (a paired design compares each subject with itself).

Comparison of more than 2 means
• Why can't we do several t-tests? Because it increases the familywise error rate.
• What is the familywise error rate? The error rate across tests conducted on the same experimental data.

Familywise error rate
• Example: if you want to compare 3 groups and you carry out 3 t-tests, each with a 5% level of significance:
– The probability of not making a type I error in one test is 95% (= 1 − 0.05).
– The overall probability of no type I errors is 0.95 × 0.95 × 0.95 = 0.857.
– So the probability of making at least one type I error is 1 − 0.857 = 0.143, or 14.3%.
– The probability has increased from 5% to 14.3%!
– If you compare 5 groups instead of 3 (10 pairwise tests), the familywise error rate is 40%!!! (= 1 − (0.95)^n)
• Solution for multiple comparisons: analysis of variance.

Analysis of variance
• An extension of the 2-group comparison of a t-test, but with a slightly different logic:
– t-test = (mean 1 − mean 2) / pooled SD
– ANOVA = variance between means / pooled SD
• ANOVA compares variances: if the variance between the several means is greater than the variance within the groups (random error), then the means must be more spread out than they would have been by chance.
• The statistic for ANOVA is the F ratio; in an ANOVA, you test whether F is significantly higher than 1 or not.
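The familywise arithmetic above in code; note that comparing 5 groups means 10 pairwise comparisons, which is where the 40% comes from:

```python
# Familywise error rate for all pairwise comparisons at a 5% significance level
def familywise_error(n_groups, alpha=0.05):
    n_tests = n_groups * (n_groups - 1) // 2   # number of pairwise tests
    return 1 - (1 - alpha) ** n_tests

print(round(familywise_error(3), 3))   # 3 tests  -> 0.143 (14.3%)
print(round(familywise_error(5), 3))   # 10 tests -> 0.401 (~40%)
```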
• F = variance between the groups / variance within the groups (individual variability)
• F = variation explained by the model (systematic) / variation explained by unsystematic factors (random variation)
• If the variance amongst the sample means is greater than the error/random variance, then F > 1.

Analysis of variance
• Between groups: SS = 2.665, df = 4, MS = 0.6663, F = 8.423, p < 0.0001
• Within groups: SS = 5.775, df = 73, MS = 0.0791
• Total: SS = 8.44, df = 77
• The variance (= SS/(N−1)) is the mean square; df = degrees of freedom, with df = N − 1.
• In power analysis: pooled SD = √MS(residual).
• Hypothetical model: the total sum of squares splits into between-groups variability and within-groups variability.

Example: protein expression.xlsx
• Question: is there a difference in protein expression between the 5 cell lines?
• 1 Plot the data
• 2 Check the assumptions for a parametric test
• 3 Statistical analysis: ANOVA
[Figures: protein expression per cell group (A to E), raw and log-transformed.]

Parametric tests assumptions
• Normality ✓ (after log transformation)

Analysis of variance: post hoc tests
• The ANOVA is an "omnibus" test: it tells you that there is (or is not) a difference between your means, but not exactly which means are significantly different from which other ones.
• To find out, you need to apply post hoc tests.
• These post hoc tests should only be used when the ANOVA finds a significant effect.

Analysis of variance: results
• Homogeneity of variance ✓
• F = 8.6727/0.08278

Correlation
• A correlation coefficient is an index number that measures the magnitude and the direction of the relation between 2 variables.
• It is designed to range in value between −1 and +1.
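A sketch of r and its square with scipy, on invented data showing a negative relationship (not the course measurements):

```python
from scipy.stats import pearsonr

# Toy data with a clear negative relationship
parasite_burden = [1.2, 1.8, 2.1, 2.5, 2.9, 3.3]
body_mass       = [26.0, 24.5, 23.0, 21.0, 19.5, 18.0]

r, p = pearsonr(parasite_burden, body_mass)
r_squared = r ** 2     # proportion of variance in Y explained by X
print(round(r, 3), round(r_squared, 3), p)
```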
Correlation
• Most widely used correlation coefficient: the Pearson product-moment correlation coefficient, "r".
• The 2 variables do not have to be measured in the same units, but they have to be proportional (meaning linearly related).
• Coefficient of determination: r is the correlation between X and Y; r² is the coefficient of determination, which gives you the proportion (percentage) of variance in Y that can be explained by X.

Correlation: example (roe deer.xlsx)
• Is there a relationship between parasite burden and body mass in roe deer?
[Figure: body mass against parasite burden, males and females plotted separately.]
• There is a negative correlation between parasite load and fitness, but this relationship is only significant for the males (p=0.0049 vs. females: p=0.2940).

Curve fitting
• Dose-response curves: nonlinear regression.
• Dose-response experiments typically use around 5-10 doses of agonist, equally spaced on a logarithmic scale; the Y values are responses.
• The aim is often to determine the IC50 or the EC50:
– IC50 (I = inhibition): the concentration of an agonist that provokes a response half way between the maximal (Top) response and the maximally inhibited (Bottom) response.
– EC50 (E = effective): the concentration that gives a half-maximal response.
• Stimulation: Y = Bottom + (Top − Bottom)/(1 + 10^((LogEC50 − X) × HillSlope))
• Inhibition: Y = Bottom + (Top − Bottom)/(1 + 10^(X − LogIC50))
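The stimulation equation above can also be fitted outside Prism with nonlinear least squares; everything below (doses, parameters, noise) is simulated, not the course data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter dose-response model from the slide (stimulation form)
def dose_response(log_dose, bottom, top, log_ec50, hill):
    return bottom + (top - bottom) / (1 + 10 ** ((log_ec50 - log_dose) * hill))

# Simulated data: true Bottom=10, Top=500, LogEC50=-6, Hill slope=1
log_doses = np.linspace(-9, -3, 10)        # doses equally spaced on a log scale
true = dose_response(log_doses, 10, 500, -6.0, 1.0)
rng = np.random.default_rng(0)
responses = true + rng.normal(0, 5, size=true.size)

params, _ = curve_fit(dose_response, log_doses, responses,
                      p0=[0, 400, -7, 1])  # rough initial values
bottom, top, log_ec50, hill = params
print(round(log_ec50, 2))                  # recovered LogEC50, close to -6
```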
Curve fitting: example (Inhibition data.xlsx)
[Figure: response against log(agonist), M, with and without inhibitor.]
Step-by-step analysis and considerations:
1. Choose a fit: it is not necessary to normalize; only choose normalization when the values defining 0 and 100 are precise. A variable-slope (4-parameter) model is better if you have plenty of data points.
2. Compare: test for a difference between conditions in one or more parameters, either directly ("Diff in parameters") or as constraint vs no constraint.
3. Constrain: depends on your experiment, e.g. on whether your data define the top or the bottom of the curve.
4. Weights: important if you have unequal scatter among replicates.
5. Initial values: defaults are usually OK unless the fit looks funny.
6. Range: defaults are usually OK unless you are not interested in the full range of the x-variable (i.e. time).
7. Output: the summary table presents the same results in a… summarized way.
8. Diagnostics: check for normality (weights) and outliers (but keep them in the analysis); check the replicates test.
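Step 3 (Constrain) has a direct analogue in code: if the data never define the bottom plateau, bounds can pin Bottom near 0, much as Prism's Constrain tab would. The data here are simulated and deliberately truncated; this is a sketch, not the course dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(log_dose, bottom, top, log_ec50, hill):
    return bottom + (top - bottom) / (1 + 10 ** ((log_ec50 - log_dose) * hill))

# Simulated doses that stop before the bottom plateau is reached
log_doses = np.linspace(-5.5, -3, 8)
rng = np.random.default_rng(1)
responses = dose_response(log_doses, 0, 100, -5.0, 1.0) + rng.normal(0, 3, 8)

# Constrained fit: Bottom held in [0, 1e-6], i.e. effectively fixed at 0
lower = [0.0, 50.0, -9.0, 0.1]
upper = [1e-6, 500.0, -1.0, 5.0]
params, _ = curve_fit(dose_response, log_doses, responses,
                      p0=[0.0, 100.0, -5.0, 1.0], bounds=(lower, upper))
bottom, top, log_ec50, hill = params
print(round(log_ec50, 2))   # close to the simulated LogEC50 of -5
```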
Curve fitting: example (Inhibition data.xlsx)
[Figures: the same data fitted four ways: non-normalized and normalized responses, each with 3-parameter and 4-parameter models, with and without inhibitor; the EC50 is marked on each curve.]
• Each fit is accompanied by a "Replicates test for lack of fit", reporting the SD of the replicates, the SD of lack of fit, a discrepancy ratio (F), a P value and a verdict: "Evidence of inadequate model? Yes/No".
• In this example, the test flags the no-inhibitor fits (small P values: evidence of an inadequate model) but not the inhibitor fits (P values well above 0.05).
My email address if you need some help with GraphPad: anne.segonds-pichon@babraham.ac.uk