Hybrid experimental designs are just what the name implies -- new strains formed by combining features of more established designs. Many variations could be constructed from standard design features. Here, I'm going to introduce two hybrid designs. I feature these because they illustrate especially well how a design can be constructed to address specific threats to internal validity.
The Solomon Four-Group Design
The Solomon Four-Group Design addresses a potential testing threat. Recall that a testing threat occurs when the act of taking a test affects how people score on a retest or posttest. The design notation is shown in the figure. It's probably not a big surprise that this design has four groups. Note that two of the groups receive the treatment and two do not. Further, two of the groups receive a pretest and two do not. One way to view this is as a 2x2 (Treatment Group X Measurement Group) factorial design. Within each treatment condition we have a group that is pretested and one that is not. By explicitly including testing as a factor in the design, we are able to assess experimentally whether a testing threat is operating.
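The 2x2 logic can be sketched in a small simulation. This is a minimal illustration, not part of the original design description: the score model, group size, and effect sizes are all hypothetical, and the `testing_bias` parameter stands in for a testing threat (set to zero here, so none is operating).

```python
import random

random.seed(42)

def posttest_score(treated, pretested, effect=10.0, testing_bias=0.0):
    # Hypothetical score model: a baseline around 50, plus a treatment
    # effect for treated participants, plus an optional bias for anyone
    # who took the pretest (the testing threat, zero by default).
    base = random.gauss(50, 5)
    return base + (effect if treated else 0.0) + (testing_bias if pretested else 0.0)

def mean(xs):
    return sum(xs) / len(xs)

n = 500
# The four randomly assigned cells of the Solomon design.
cells = {
    ("treat", "pre"):     [posttest_score(True, True) for _ in range(n)],
    ("treat", "nopre"):   [posttest_score(True, False) for _ in range(n)],
    ("control", "pre"):   [posttest_score(False, True) for _ in range(n)],
    ("control", "nopre"): [posttest_score(False, False) for _ in range(n)],
}
m = {k: mean(v) for k, v in cells.items()}

# Main effect of treatment: treated cells minus control cells.
treatment_effect = (m[("treat", "pre")] + m[("treat", "nopre")]) / 2 \
                 - (m[("control", "pre")] + m[("control", "nopre")]) / 2
# Main effect of pretesting: if this is near zero, no testing threat.
testing_effect = (m[("treat", "pre")] + m[("control", "pre")]) / 2 \
               - (m[("treat", "nopre")] + m[("control", "nopre")]) / 2

print(f"treatment effect = {treatment_effect:.1f}")
print(f"testing effect   = {testing_effect:.1f}")
```

Rerunning with a nonzero `testing_bias` would shift the pretested cells upward and the testing-effect estimate away from zero -- exactly the comparison the four-group layout makes possible.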
Possible Outcomes. Let's look at a couple of possible outcomes from this design. The first outcome graph shows what the data might look like if there is a treatment or program effect and there is no testing threat. You need to be careful in interpreting this graph to note that there are six dots -- one to represent the average for each O in the design notation. To help you visually see the connection between the pretest and posttest average for the same group, a line is used to connect the dots. The two dots that are not connected by a line represent the two post-only groups. Look first at the two pretest means. They are close to each other because the groups were randomly assigned. Now, look at the posttest values. There appears to be no difference between the treatment groups, even though one got a pretest and the other did not. Similarly, the two control groups scored about the same on the posttest. Thus, the pretest did not appear to affect the outcome. But both treatment groups clearly outscored both controls. There is a main effect for the treatment.
Now, look at a result where there is evidence of a testing threat. In this outcome, the pretests are again equivalent (because the groups were randomly assigned). Each treatment group outscored its comparable control group. The pre-post treatment outscored the pre-post control. And, the post-only treatment outscored the post-only control. These results indicate that there is a treatment effect. But here, both groups that had the pretest outscored their comparable non-pretest group. That's evidence for a testing threat.
Switching Replications Design
The Switching Replications design is one of the strongest of the experimental designs. And, when the circumstances are right for this design, it addresses one of the major problems in experimental designs -- the need to deny the program to some participants through random assignment. The design notation indicates that this is a two-group design with three waves of measurement. You might think of this as two pre-post treatment-control designs grafted together. That is, the implementation of the treatment is repeated or replicated. And in the repetition of the treatment, the two groups switch roles -- the original control group becomes the treatment group in phase 2 while the original treatment group acts as the control. By the end of the study all participants have received the treatment.
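The two-group, three-wave structure can be sketched as a simulation. This is an illustrative sketch only -- the latent-ability model and effect size are assumptions, and the model here deliberately adds the effect once per group with no further growth, matching the short-term-mastery scenario discussed below.

```python
import random

random.seed(0)

def simulate_switching_replications(n=200, effect=8.0):
    # Each participant has a latent score; receiving the treatment
    # adds `effect` once (short-term mastery: no continued growth).
    g1 = [random.gauss(50, 5) for _ in range(n)]  # treated in phase 1
    g2 = [random.gauss(50, 5) for _ in range(n)]  # treated in phase 2
    mean = lambda xs: sum(xs) / len(xs)

    waves = []
    # Wave 1: pretest -- no one has been treated yet.
    waves.append((mean(g1), mean(g2)))
    # Wave 2: after phase 1, only group 1 has received the program.
    g1 = [x + effect for x in g1]
    waves.append((mean(g1), mean(g2)))
    # Wave 3: after phase 2, group 2 has switched into the treatment role.
    g2 = [x + effect for x in g2]
    waves.append((mean(g1), mean(g2)))
    return waves

for i, (m1, m2) in enumerate(simulate_switching_replications(), 1):
    print(f"wave {i}: group1 = {m1:.1f}, group2 = {m2:.1f}")
```

The printed means trace the pattern described in the outcomes below: equivalent at wave 1, separated at wave 2, and back together at wave 3 once both groups have had the program.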
The switching replications design is most feasible in organizational contexts where programs are repeated at regular intervals. For instance, it works especially well in schools that are on a semester system. All students are pretested at the beginning of the school year. During the first semester, Group 1 receives the treatment and during the second semester Group 2 gets it. The design also enhances organizational efficiency in resource allocation. Schools only need to allocate enough resources to give the program to half of the students at a time.
Possible Outcomes. Let's look at two possible outcomes. In the first example, we see that when the program is given to the first group, the recipients do better than the controls. In the second phase, when the program is given to the original controls, they "catch up" to the original program group. Thus, we have a converge, diverge, reconverge outcome pattern. We might expect a result like this when the program covers specific content that the students master in the short term and where we don't expect that they will continue getting better as a result.
Now, look at the other example result. During the first phase we see the same result as before -- the program group improves while the control does not. And, as before, during the second phase we see the original control group, now the program group, improve as much as did the first program group. But now, during phase two, the original program group continues to increase even though the program is no longer being given to them. Why would this happen? It could happen in circumstances where the program has continuing and longer-term effects. For instance, if the program focused on learning skills, students might continue to improve even after the formal program period because they continue to apply the skills and improve in them.
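This continuing-effect scenario is easy to model by adding a carryover term for the original program group during phase two. Again a hypothetical sketch: the group sizes, the treatment effect, and the carryover amount are invented for illustration.

```python
import random

random.seed(1)

mean = lambda xs: sum(xs) / len(xs)
n, effect, carryover = 200, 8.0, 4.0  # hypothetical effect sizes
g1 = [random.gauss(50, 5) for _ in range(n)]  # program in phase 1
g2 = [random.gauss(50, 5) for _ in range(n)]  # program in phase 2

print(f"wave 1: group1 = {mean(g1):.1f}, group2 = {mean(g2):.1f}")
g1 = [x + effect for x in g1]          # phase 1: group 1 gets the program
print(f"wave 2: group1 = {mean(g1):.1f}, group2 = {mean(g2):.1f}")
g1 = [x + carryover for x in g1]       # group 1 keeps applying the skills
g2 = [x + effect for x in g2]          # phase 2: group 2 gets the program
print(f"wave 3: group1 = {mean(g1):.1f}, group2 = {mean(g2):.1f}")
```

With a positive carryover the lines no longer reconverge at wave 3: the original program group stays ahead even though both groups have now received the program.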
I said at the outset that both the Solomon Four-Group and the Switching Replications designs address specific threats to internal validity. It's obvious that the Solomon design addresses a testing threat. But what does the switching replications design address? Remember that in randomized experiments, especially when the groups are aware of each other, there is the potential for social threats -- compensatory rivalry, compensatory equalization, and resentful demoralization are all likely to be present in educational contexts where programs are given to some students and not to others. The switching replications design helps mitigate these threats because it assures that everyone will eventually get the program. And, it allocates who gets the program first in the fairest possible manner, through the lottery of random assignment.
Copyright ©2006, William M.K. Trochim, All Rights Reserved
Last Revised: 10/20/2006