A webR tutorial
Sample Size Exploration for Interaction Effects
Introduction
In regression models, interactions let us explore whether the effect of one predictor (such as x) on an outcome (y) varies depending on another predictor (z). A significant interaction effect (which we’ll define in this activity as a 95% CI for the interaction term that excludes zero) suggests that z modifies the influence of x on y.
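To make this concrete, here is a small, self-contained R example (the variable names and coefficient values are illustrative, not taken from the activity's own function) that fits a model with an interaction term and checks whether the 95% CI for x:z excludes zero:

```r
# Illustrative only: simulate data in which z moderates the effect of x on y,
# then check whether the 95% CI for the x:z term excludes zero.
set.seed(123)
n <- 200
x <- rnorm(n)
z <- rnorm(n)
y <- 0.5 * x + 0.3 * z + 0.4 * x * z + rnorm(n)  # true interaction coefficient = 0.4

fit <- lm(y ~ x * z)      # x * z expands to x + z + x:z
confint(fit)["x:z", ]     # 95% CI for the interaction coefficient
```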
In this activity, we’ll investigate two main questions:
1. The Effect of Sample Size
How does sample size impact interaction detection? With small samples, subtle interactions are harder to detect confidently. Larger samples provide more precise estimates, increasing the likelihood that the CI excludes zero, signaling a meaningful interaction.
2. The Effect Size of the Interaction
How does the interaction effect size impact the detection of moderation? The interaction effect size reflects how strongly z influences the relationship between x and y.
Here’s an intuitive breakdown:
Small Interaction Effect: Changes in z minimally affect the relationship between x and y, meaning x has a fairly consistent impact on y across levels of z.
Large Interaction Effect: z substantially moderates the effect of x on y. For example, x might have a much stronger impact on y when z is high compared to when z is low, leading to noticeable differences in slope.
In practical terms, effect size shows us how much z matters in altering x’s impact on y. Large effect sizes reveal strong moderation by z, while small effect sizes suggest a weaker or more stable relationship across levels of z.
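As a quick numeric illustration (with made-up coefficient values), under a model y = b0 + b1*x + b2*z + b3*x*z + error, the slope of x at a given value of z is b1 + b3*z, so the interaction coefficient b3 controls how much that slope shifts as z changes:

```r
# Implied slope of x at different levels of z under
# y = b0 + b1*x + b2*z + b3*x*z + error (illustrative values only)
b1 <- 0.5                  # main effect of x
b3_small <- 0.1            # small interaction coefficient
b3_large <- 0.6            # large interaction coefficient
z_vals <- c(-1, 0, 1)      # low, average, and high z (in SD units)

b1 + b3_small * z_vals     # slopes barely change: 0.4, 0.5, 0.6
b1 + b3_large * z_vals     # slopes change a lot: -0.1, 0.5, 1.1
```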
Set up the function
We’ll use a function to facilitate our exploration. Press Run Code on the code chunk below to set up the function. You don’t need to worry about how the function works.
Instructions for your exploration
Now we’re ready to use the function to study sample size and interaction effect size. In the context of this simulation, we’ll consider small, medium, and large effect sizes, defined in terms of the interaction coefficient as follows:
Small Effect: A small interaction effect means that the relationship between x and y is only slightly modified by z. For this example, if the interaction coefficient (i.e., the estimate for x:z) is around 0.1 to 0.2, we can consider this a small effect.
Medium Effect: A medium interaction effect indicates that z noticeably alters the effect of x on y, but this change is still moderate. For this example, an interaction coefficient around 0.3 to 0.5 can be considered a medium-sized effect.
Large Effect: A large interaction effect means that z substantially modifies the relationship between x and y. For this example, an interaction coefficient above 0.5 can be considered large.
You’ll use the explore_interaction_effect function (which you set up in the earlier code chunk) to understand how sample size and effect size influence the detection of an interaction effect. The function runs 25 simulations, each calculating a 95% confidence interval (CI) for the interaction term. This mimics drawing 25 samples of a given size, each with a specified effect size for the interaction between the key predictor (x) and the moderator (z). By studying these results, you’ll gain insight into how sample size and effect size affect whether the CI for the interaction term contains zero, which would indicate an unclear or non-significant interaction.
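For reference, here is a rough sketch of the kind of logic such a function might use internally. The actual explore_interaction_effect provided in the code chunk may differ in its details (data-generating values, plotting, argument defaults), so treat this only as a conceptual outline:

```r
# Conceptual sketch only: repeatedly simulate data, fit y ~ x * z, and record
# whether the 95% CI for the x:z term contains zero.
sketch_interaction_cis <- function(sample_size, interaction_effect_size, n_sims = 25) {
  results <- lapply(seq_len(n_sims), function(i) {
    x <- rnorm(sample_size)
    z <- rnorm(sample_size)
    y <- 0.5 * x + 0.3 * z + interaction_effect_size * x * z + rnorm(sample_size)
    ci <- unname(confint(lm(y ~ x * z))["x:z", ])
    data.frame(sim = i, lower = ci[1], upper = ci[2],
               contains_zero = ci[1] < 0 & ci[2] > 0)
  })
  do.call(rbind, results)
}

# Proportion of the 25 CIs that contain zero for one setting:
res <- sketch_interaction_cis(sample_size = 30, interaction_effect_size = 0.1)
mean(res$contains_zero)
```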
Steps to Follow
Set Up Your Exploration
Start by choosing a small sample size (e.g., 30) and a small interaction effect size (e.g., 0.1). These values can be entered directly into the function call below.
Run the function by pressing Run Code on the code chunk.
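For example, assuming the argument names used later in these instructions (sample_size and interaction_effect_size), the first call might look like this:

```r
# Small sample, small interaction effect
explore_interaction_effect(sample_size = 30, interaction_effect_size = 0.1)
```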
Examine the Results
After each run, study the plot output:
Confidence Intervals: Each point on the plot represents the interaction effect estimate from one of 25 simulations, with vertical lines showing the CI around each estimate.
Colors: The CI lines are green when they do not contain zero, indicating a likely interaction effect. Gray indicates that the CI contains zero, suggesting a less certain interaction effect.
Note the proportion of CIs containing zero in the plot’s subtitle.
Experiment with Larger Sample Sizes
Gradually increase the sample_size argument (e.g., try 50, 100, 200, 500, 1000, 2000) while keeping the interaction effect size constant. Observe how increasing the sample size affects the proportion of CIs containing zero. Larger samples provide more precise estimates, so you should notice fewer gray (non-significant) CIs as the sample size increases. Note that no seed is set, so each time you click Run Code you’ll see a slightly different result even when you haven’t changed the sample size or effect size.
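For instance, a sequence of calls like the following (assuming the same argument names as above) lets you compare settings while holding the effect size fixed:

```r
# Hold the interaction effect size at 0.1 and step up the sample size,
# re-running and comparing the plot after each call
explore_interaction_effect(sample_size = 50,   interaction_effect_size = 0.1)
explore_interaction_effect(sample_size = 200,  interaction_effect_size = 0.1)
explore_interaction_effect(sample_size = 1000, interaction_effect_size = 0.1)
```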
Explore with Larger Effect Sizes
Reset the sample_size argument to a moderate value (e.g., 100) and set the interaction_effect_size to something small (e.g., 0.1). Gradually increase the interaction_effect_size argument (e.g., 0.2, 0.4, 0.6, 0.8), keeping the sample size constant. Observe how the larger effect size affects the CIs across the simulations. With a larger interaction effect size, you should see fewer CIs containing zero, even with a moderate sample size.
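A matching sequence for this step might look like this, holding the sample size at 100 while the interaction effect size grows:

```r
# Hold the sample size at 100 and step up the interaction effect size
explore_interaction_effect(sample_size = 100, interaction_effect_size = 0.2)
explore_interaction_effect(sample_size = 100, interaction_effect_size = 0.4)
explore_interaction_effect(sample_size = 100, interaction_effect_size = 0.8)
```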
Summarize Your Findings
Sample Size: How does increasing the sample size influence the proportion of CIs containing zero?
Effect Size: How does increasing the interaction effect size impact the results, especially with smaller sample sizes?
Reflect on why small samples and small effect sizes make it harder to detect interaction effects.
Through these steps, you’ll gain a better understanding of how statistical power and effect size impact the reliability of interaction term estimates. Enjoy exploring, and take note of patterns in the results!