Generate 50 data points from a normal distribution with a mean of 0 and SD of 1.
a <- NULL
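One possible way to fill this in (a sketch, not the only solution) uses `rnorm()`:

```r
# 50 draws from a normal distribution with mean 0 and SD 1
a <- rnorm(n = 50, mean = 0, sd = 1)
```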
Generate another variable (`b`) that is equal to the sum of `a` and another 50 data points from a normal distribution with a mean of 0.5 and SD of 1.
b <- NULL
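A sketch, assuming `a` was created as above:

```r
# b = a plus 50 new draws from a normal distribution with mean 0.5 and SD 1
b <- a + rnorm(n = 50, mean = 0.5, sd = 1)
```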
Run a two-tailed, paired-samples t-test comparing `a` and `b`.
t <- NULL
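A sketch using base R's `t.test()` (two-tailed is the default):

```r
# paired = TRUE gives a paired-samples t-test on a and b
t <- t.test(a, b, paired = TRUE)
```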
Calculate power for a two-tailed t-test with an alpha (`sig.level`) of .05 for detecting a difference between two independent samples of 50 with an effect size of 0.3. Use at least 1e4 replications to estimate power accurately.
Hint: You can use the `sim_t_ind` function from the T-Test Class Notes.
sim_t_ind <- NULL
power.sim <- NULL
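One version of this (the exact class-notes definition may differ; the argument names here are assumptions) simulates two samples, returns the p-value, and then counts the proportion of significant replications:

```r
# assumed form of sim_t_ind: simulate two independent samples of size n
# and return the p-value of a two-tailed independent-samples t-test
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
  v1 <- rnorm(n, m1, sd1)
  v2 <- rnorm(n, m2, sd2)
  t.test(v1, v2)$p.value
}

# with SD = 1, an effect size of 0.3 corresponds to means of 0 and 0.3;
# power is the proportion of p-values below alpha across 1e4 replications
power.sim <- mean(replicate(1e4, sim_t_ind(50, 0, 1, 0.3, 1)) < .05)
```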
Compare this to the result of `power.t.test()` for the same design.
power.analytic <- NULL
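A sketch using the built-in `power.t.test()`:

```r
# analytic power for the same two-sample design
power.analytic <- power.t.test(n = 50, delta = 0.3, sd = 1,
                               sig.level = .05,
                               type = "two.sample")$power
```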
Modify the `sim_t_ind` function to handle different sample sizes. Use it to calculate the power of the following design:
sim_t_ind <- NULL
power6 <- NULL
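A sketch of one way to extend the function above with separate `n1` and `n2` arguments. The design itself is not specified in the text here, so the values in the `power6` call are placeholders only:

```r
# sim_t_ind extended to allow different sample sizes per group
sim_t_ind <- function(n1, m1, sd1, n2, m2, sd2,
                      alternative = "two.sided") {
  v1 <- rnorm(n1, m1, sd1)
  v2 <- rnorm(n2, m2, sd2)
  t.test(v1, v2, alternative = alternative)$p.value
}

# placeholder design values -- replace with the design from the exercise
power6 <- mean(replicate(1e4, sim_t_ind(20, 10, 4, 30, 13, 4.5)) < .05)
```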
Do noisy environments slow down reaction times for a dot-probe task? Calculate power for a one-tailed t-test with an alpha (`sig.level`) of .005 for detecting a difference of at least 50 ms, where participants in the quiet condition have a mean reaction time of 800 ms. Assume both groups have 80 participants and an SD of 100 ms.
power7 <- NULL
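One way to get this analytically (a simulation with the modified `sim_t_ind()` above would also work) is `power.t.test()`:

```r
# one-tailed two-sample test: delta = 50 ms, SD = 100, n = 80 per group
power7 <- power.t.test(n = 80, delta = 50, sd = 100, sig.level = .005,
                       type = "two.sample",
                       alternative = "one.sided")$power
```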
The [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution) is good for modeling the rate of something, like the number of texts you receive per day. Then you can test hypotheses like whether you receive more texts on weekends than weekdays. The Poisson distribution gets more like a normal distribution when the rate gets higher, so it's most useful for low-rate events.
Use `ggplot()` to create a histogram of 1000 random numbers from a Poisson distribution with a `lambda` of 4. Values can only be integers, so set an appropriate binwidth.
ggplot()
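A sketch (the fill colour and the use of a `tibble` are just choices here):

```r
library(ggplot2)

# 1000 random draws from a Poisson distribution with lambda = 4
pois_data <- tibble::tibble(x = rpois(1000, lambda = 4))

# binwidth = 1 because Poisson values are integers
ggplot(pois_data, aes(x)) +
  geom_histogram(binwidth = 1, fill = "steelblue", colour = "black")
```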
Demonstrate to yourself that the binomial distribution looks like the normal distribution when the number of trials is greater than 10.
Hint: You can calculate the equivalent mean for the normal distribution as the number of trials times the probability of success (`binomial_mean <- trials * prob`) and the equivalent SD as the square root of the mean times one minus the probability of success (`binomial_sd <- sqrt(binomial_mean * (1 - prob))`).
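A sketch of one way to demonstrate this, overlaying simulated binomial and normal values (the specific `trials` and `prob` values are arbitrary choices):

```r
library(ggplot2)

trials <- 50     # anything well above 10 works
prob   <- 0.5
n_sims <- 1e4

binomial_mean <- trials * prob
binomial_sd   <- sqrt(binomial_mean * (1 - prob))

sims <- data.frame(
  binomial = rbinom(n_sims, size = trials, prob = prob),
  normal   = rnorm(n_sims, mean = binomial_mean, sd = binomial_sd)
)

# the two histograms should overlap almost completely
ggplot(sims) +
  geom_histogram(aes(binomial), binwidth = 1, alpha = 0.5, fill = "red") +
  geom_histogram(aes(normal), binwidth = 1, alpha = 0.5, fill = "blue")
```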
Remember, there are many, many ways to do things in R. The important thing is to create your functions step-by-step, checking the accuracy at each step.
Write a function to create a pair of variables of any size with any specified correlation. Have the function return a tibble with columns `X1` and `X2`. Make sure all of the arguments have a default value.
Hint: modify the code from the Bivariate Normal section from the class notes.
bvn2 <- function() {}
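A sketch using `MASS::mvrnorm()`; the argument names and defaults below are assumptions, not necessarily the class-notes version:

```r
# return a tibble of n pairs of values with correlation rho
bvn2 <- function(n = 100, rho = 0, mu1 = 0, mu2 = 0, sd1 = 1, sd2 = 1) {
  covar <- rho * sd1 * sd2                      # covariance from rho and SDs
  sigma <- matrix(c(sd1^2, covar,
                    covar, sd2^2), ncol = 2)    # covariance matrix
  m <- MASS::mvrnorm(n, mu = c(mu1, mu2), Sigma = sigma)
  tibble::tibble(X1 = m[, 1], X2 = m[, 2])
}
```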
Use the function to create a table of 10 pairs of values with the default values for the other parameters.
dat10 <- NULL
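With the sketch above, this is just a call with `n = 10`:

```r
# 10 pairs of values, defaults for everything else
dat10 <- bvn2(n = 10)
```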
Use `faux::rnorm_multi()` to make the same table as above.
dat_faux <- NULL
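A sketch; the parameter names follow the `faux::rnorm_multi()` documentation, but check them against your installed version:

```r
# two uncorrelated variables named X1 and X2, 10 rows
dat_faux <- faux::rnorm_multi(n = 10, vars = 2, mu = 0, sd = 1, r = 0,
                              varnames = c("X1", "X2"))
```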
Use `cor.test()` to test the Pearson's product-moment correlation between two variables generated with either your function or `faux::rnorm_multi()`, with an `n` of 50 and a `rho` of 0.45.
my_cor <- NULL
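A sketch using the `bvn2()` function sketched above:

```r
# one simulated dataset with n = 50 and rho = 0.45
dat50  <- bvn2(n = 50, rho = 0.45)
my_cor <- cor.test(dat50$X1, dat50$X2)
```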
Test your function from Question 10 by calculating the correlation between your two variables many times for a range of values and plotting the results. Hint: the `purrr::map()` functions might be useful here.
# set up all values you want to test
sims_bvn2 <- NULL
ggplot()
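A sketch of one approach, assuming the `bvn2()` version above; the range of `rho` values and the number of replications are arbitrary choices:

```r
library(tidyverse)

rhos   <- seq(-0.9, 0.9, by = 0.3)   # values of rho to test
n_reps <- 500                        # replications per rho

# simulate one dataset per row and record its sample correlation
sims_bvn2 <- crossing(rho = rhos, rep = 1:n_reps) %>%
  mutate(r = map_dbl(rho, ~ cor(bvn2(n = 50, rho = .x))[1, 2]))

# sample correlations should centre on the rho used to generate them
ggplot(sims_bvn2, aes(factor(rho), r)) +
  geom_violin() +
  stat_summary(fun = mean, geom = "point", colour = "red")
```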
Compare that to the same test and plot for `rnorm_multi()`.
# set up all values you want to test
sims_faux <- NULL
ggplot()
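The same sketch with `faux::rnorm_multi()`, reusing `rhos` and `n_reps` from the previous block:

```r
sims_faux <- crossing(rho = rhos, rep = 1:n_reps) %>%
  mutate(r = map_dbl(rho, ~ cor(faux::rnorm_multi(
    n = 50, vars = 2, mu = 0, sd = 1, r = .x,
    varnames = c("X1", "X2")
  ))[1, 2]))

ggplot(sims_faux, aes(factor(rho), r)) +
  geom_violin() +
  stat_summary(fun = mean, geom = "point", colour = "red")
```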
Make a new function that calculates power for `cor.test()` through simulation.
power.cor.test <- function(){}
power_cor <- NULL
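A sketch; the argument names loosely mirror `power.t.test()` and are assumptions here:

```r
# estimate power for cor.test() by simulation
power.cor.test <- function(n = 50, rho = 0.5, sig.level = .05, reps = 1e4) {
  p_values <- replicate(reps, {
    dat <- bvn2(n = n, rho = rho)        # or faux::rnorm_multi()
    cor.test(dat$X1, dat$X2)$p.value
  })
  mean(p_values < sig.level)             # proportion of significant tests
}

# example: power to detect rho = 0.45 with n = 50 at alpha = .05
power_cor <- power.cor.test(n = 50, rho = 0.45, sig.level = .05)
```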