Intro

Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today we're going to learn how to use one-way ANOVA for hypothesis testing and the Bonferroni test. Here's our problem statement: The accompanying data are the weights (in kilograms) of poplar trees that were obtained from trees planted in a rich and moist region. The trees were given different treatments identified in the accompanying table. Also shown are partial results from using the Bonferroni test with the sample data. Complete Parts A through C.

Part A

Part A says, “Use a 0.10 significance level to test the claim that the different treatments result in the same mean weight.” The first thing we're asked to do is determine the null and alternative hypotheses for one-way ANOVA. The null and alternative hypotheses are pretty much set; they're going to be the same regardless of what it is you're actually testing. The null hypothesis, because it's a statement of equality by definition, will simply say that all of the parameters you're looking at are equal to one another. That being the case, we're going to select the option where all of our parameters are equal to each other. The alternative hypothesis will then be that at least one of these parameters is different from the others. I check my answer. Excellent!

Now we're asked to find the test statistic. To do this, I'm going to load the data into StatCrunch so that StatCrunch can do the heavy lifting for me. So here I'm opening my data in a separate window in StatCrunch. I'm going to resize this window so we can see everything a little bit better. Now inside StatCrunch, I'm going to go to Stat –> ANOVA –> One Way. Here in the options window, I select all of the different columns so we get everything in. Once I have my columns selected, I don't need to make any other adjustments here in the options window, so I go ahead and press Compute!, and here's my results window. Right here at the bottom of my results window, we see the ANOVA table that we need. Our test statistic we're going to take right from the ANOVA table. We're asked to round to two decimal places. Excellent! Our P-value we will also obtain from the ANOVA table; here I'm asked to round to three decimal places. Fantastic!

Now we're asked to conclude our hypothesis test. Remember earlier in the problem statement we were asked to use a 10% significance level to test our claim. Our P-value is definitely less than 10%. Therefore, we're inside the rejection region, so we're going to reject the null hypothesis. And whenever we reject a null hypothesis, there's always going to be sufficient evidence. Well done!

Part B

Now Part B asks us, “What do the displayed Bonferroni results tell us?” Well, we have to go back and look at the actual results, so I click on the icon. Here are the Bonferroni results in this table down here. Notice we're comparing three different pairs: the first treatment with the second, the first treatment with the third, and the first treatment with the fourth. To make the actual comparisons, we're going to be using these significance numbers here at the end of the table. These are actually P-values, so we're going to treat them the same way as we would in any other hypothesis test. If the P-value is greater than our significance level, that means there's not a significant difference between the two treatment groups.
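As a quick aside before we finish up Part B: if you ever want to double-check a one-way ANOVA table like the one from Part A without StatCrunch, a minimal Python sketch looks something like this. The weights below are made-up placeholders (the full data set isn't reproduced here), so the F statistic and P-value it prints won't match the actual homework numbers.

```python
from scipy import stats

# Placeholder weights (kg) for the four treatment groups (not the real homework data)
no_treatment = [1.21, 0.57, 0.56, 0.13, 1.30]
fertilizer = [0.94, 0.87, 0.46, 0.58, 1.03]
irrigation = [0.07, 0.66, 0.10, 0.82, 0.94]
fertilizer_and_irrigation = [0.85, 1.78, 1.47, 2.25, 1.64]

# One-way ANOVA: returns the F statistic and P-value, the same two numbers
# StatCrunch reports in its ANOVA table
f_stat, p_value = stats.f_oneway(
    no_treatment, fertilizer, irrigation, fertilizer_and_irrigation
)
print(round(f_stat, 2), round(p_value, 3))
```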
Remember that if the P-value is less than or equal to the significance level, then that means we reject the null hypothesis. Rejecting the null hypothesis means we're rejecting the statement that everything's equal to each other, that there is actually some difference. So in order for there to be some significant difference, we have to reject the null hypothesis, which means we have to be within that region of rejection. And that means having a significance number, or P-value, here that's less than or equal to our significance level. Well, what significance level do we have for testing our claim? It's 10%. So this first pairing, where we have a P-value of 1, that's definitely greater than 10%, so there's not going to be a significant difference between these two treatment groups. Same thing for the second pairing; 0.901 is greater than 10%, so there's not anything there. But here this last one — 0.033 — that's going to be less than 10%. Therefore there is a significant difference for this last pairing. So I'm going to go ahead and update my answer fields and the drop-down menus with those conclusions. I check my answer. Good job!

Part C

Part C asks us, “Use the Bonferroni test procedure with a 0.10 significance level to test for a significant difference between the mean amount of the irrigation treatment group and the group treated with both fertilizer and irrigation. Identify the test statistic and the P-value. What do the results indicate?”
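Before we dig into Part C, here's the Part B decision rule written out as a tiny sketch, using the three P-values as they appear in the displayed Bonferroni table; the pairing labels are just my own shorthand for the rows shown in the output.

```python
alpha = 0.10

# P-values from the displayed Bonferroni table (labels are my own shorthand)
bonferroni_p = {
    "treatment 1 vs treatment 2": 1.000,
    "treatment 1 vs treatment 3": 0.901,
    "treatment 1 vs treatment 4": 0.033,
}

for pair, p in bonferroni_p.items():
    if p <= alpha:
        print(f"{pair}: P-value {p:.3f} <= {alpha}, significant difference")
    else:
        print(f"{pair}: P-value {p:.3f} > {alpha}, no significant difference")
```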
OK, the first thing we're asked to look for is the test statistic. This is different from the test statistic we see here in our ANOVA table, so do not put that in; that is not the test statistic they're looking for with Bonferroni. You have to make some adjustments, and the adjustment we make is to use a formula based on the T distribution. So here on the screen is a slide I made in PowerPoint. This is the formula that we need to be using now. The numbers for this formula are going to come from the ANOVA results that we have previously seen. So here we have x-bar-1 and x-bar-2; these are going to come from our results, and notice we're looking at the mean amount of the irrigation treatment group and the group treated with both fertilizer and irrigation. So that's these two columns here. And notice we have mean values for those computed here: 0.418 for the irrigation group and 1.666 for the fertilizer and irrigation group. Then we're asked to find the mean square of the errors here in our denominator. That's going to come from our ANOVA table, which is actually this number right here. Here's the mean square column, here's the error row, so the mean square of the errors is this number here — 0.17508. Our sample sizes n1 and n2 we can also get from the results; notice we have a sample size column here in the column statistics, and we can just grab those numbers. So when we do that, out come the numbers that we put in.

Notice how I'm putting in that second group first, and the reason is that they're actually looking for a positive number here. They don't actually state it, but they're looking for the positive number, the reason being that the Bonferroni test, remember, is a two-tailed type of test. So there's a positive test statistic and a negative test statistic, and since they're only looking for one number here, the default convention is just to give them the positive number. I wish they'd be more explicit and say that, but seeing as they haven't, I'm here to help guide you through that. So we're actually looking for the positive number, and that's why I'm putting the greater of the two mean values in first, so that we can actually get a positive number to come out. We punch this out on the calculator, and here's what we get. We're asked to round to two decimal places, so we're looking for, in this case, 4.72. Fantastic!

Now to find the P-value, I have to go and use actual technology, and the technology of course that I'm going to use is StatCrunch. We have to go back to the T distribution to calculate that, since this test statistic follows a T distribution. I'm going to go to Stat –> Calculators –> T. And here my degrees of freedom — what are my degrees of freedom? Well, technically the degrees of freedom will be the total number of sample values in the whole data set minus the number of treatment groups that we have. But I find it simpler just to use the ANOVA table. I clear this out of the way so we can see our ANOVA table here. So here it's clear that we've got five samples in each of four columns, so there are 20 samples total and 4 columns; 20 minus 4 is 16. But if you look down here on the error row for degrees of freedom in your ANOVA table, you see that same number, 16. And that's why I like to just use the ANOVA table, because it's a little less math that I have to do; it's quicker just to grab this number and go with it. So we want 16 degrees of freedom for our T distribution, and I put that in here.
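If you'd rather check this step in code than on a calculator, here's a minimal sketch of the same arithmetic, along with the two-tailed P-value the T calculator is about to give us. The numbers are the ones read off the StatCrunch output above.

```python
from scipy import stats

# Values read off the StatCrunch output
xbar_irrigation = 0.418        # mean weight, irrigation group
xbar_fert_irrigation = 1.666   # mean weight, fertilizer and irrigation group
mse = 0.17508                  # mean square for error from the ANOVA table
n1 = n2 = 5                    # sample size in each treatment group

# Bonferroni test statistic: difference in means divided by
# sqrt(MSE * (1/n1 + 1/n2)), with the larger mean first so it comes out positive
t_stat = (xbar_fert_irrigation - xbar_irrigation) / (mse * (1 / n1 + 1 / n2)) ** 0.5

# Degrees of freedom: total sample values minus number of treatment groups
df = 20 - 4

# Two-tailed P-value: double the one-tail area, just like doubling the T-calculator result
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)

print(round(t_stat, 2))        # about 4.72
print(round(p_two_tailed, 4))  # about 0.0002
```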
And then for my test statistic, I'm going to put in what I calculated previously. Notice the direction here: the calculator can be set to less than or to greater than, and either one works as long as the sign on the test statistic matches. Either way I've got one tail, but since this is a two-tailed test, I've got the same area on the other side of my distribution. So I really need to multiply this number by 2 to get the P-value that I need to put in my answer field. Well, if I take 0.0001 and multiply by 2, I'm going to get 0.0002. Rounded to three decimal places, that's just zero. Well done!

And now I compare my P-value with my significance level, though technically I should adjust the P-value before making that comparison. I need to multiply the P-value that I just entered into my answer field by the total number of pairings possible. There are four different groups, which means there are six different ways to pair them up: one with two, one with three, one with four, two with three, two with four, and three with four. That's six possibilities. So I should multiply this by six before comparing with my significance level. However, since my P-value is already essentially zero, and zero times anything is zero, we just compare zero with 10%. Of course we're gonna reject with a P-value of zero inside the region of rejection. Therefore, I'm going to reject H0. And whenever we reject H0, there's always sufficient evidence. Nice work!

And that's how we do it at Aspire Mountain Academy. Be sure to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is boring or just doesn't want to help you learn stats, go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you'd like to see. Thanks for watching! We'll see you in the next video.