Frequently Asked Questions (FAQs)
- What is the purpose of the website?
ProvenTutoring is designed to provide comparable information on tutoring programs, provided during the school day, that are proven effective in rigorous studies. Each program is described using a common template so that program features and evidence are easy to compare. The website links to program websites for users who want more detailed information.
- Who developed and manages this website?
Staff at Johns Hopkins University’s Center for Research and Reform in Education developed and manage this site. Information and video content are supplied by program providers.
- How can I find out about programs that have been successful with students or schools like mine?
Users can select subjects and grade levels to explore. Each program page describes the populations of students and schools involved in the research that validated the program.
- Why do I not see a program I know about on this website?
This website focuses only on programs proven effective in research. It is also restricted to tutoring programs provided during the school day that do not require hiring certified teachers as tutors (impractical in today’s teacher shortage). For a broader set of tutoring and non-tutoring programs, go to www.evidenceforessa.org and enter the name of any program you seek in the search bar on the Home Page.
- How can schools choose among proven tutoring programs?
We would suggest that school leaders weigh several factors in deciding which proven tutoring programs to select:
a. Effect size: Go for the big numbers. If you are comparing equally rigorous evaluations, effect size is a meaningful indicator of the amount of gain students are likely to make in comparison to an untreated control group, assuming implementation is of high quality. For example, an effect size of +0.20 means tutored students gained about 0.20 standard deviations more than similar students who did not receive tutoring. All of the studies validating programs on ProvenTutoring.org used similar, rigorous methods, so effect sizes are a good indicator of impact.
b. Group size: Small-group interventions (2-5 students) allow more students to be served and can still be effective. One-to-one interventions are much more intensive and serve fewer students. Consider your students’ needs. If two programs have similar effect sizes but one can teach two to four times as many students, you may wish to choose the one that extends the benefits of tutoring to more students.
c. Where the program was evaluated: Common sense should tell you that a rural school should prefer a program evaluated in rural schools, and an urban school should prefer one evaluated in urban settings. If there are no evaluations in schools like yours, you might ask program leaders for examples of the program in use in schools like yours, perhaps even nearby.
d. Visit a program’s website, view videos of the program in action, or, if possible, visit a nearby school using the program. Ask questions of program staff and current users. But check the program’s data first to avoid selecting a program that looks great but has never been found to make much of a difference.
- How many students can a program serve at a time?
Several variables related to the tutoring model and the tutors are needed to estimate an answer to this question:
- Group Size: How many students will be served per session?
- Sessions: How many sessions per day can each tutor provide?
- Tutors: How many tutors are there?
Multiply these variables to determine how many students a program can serve at a time:
(Group Size) X (Sessions Per Day) X (Number of Tutors) = Students Served at a Time
If you want to estimate how many students will be served per year, consider whether students will be served for a semester, quarter, or year. Multiply the number of students served at a time by the number of time periods in a school year.
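For example, a hypothetical program with a group size of 4, tutors who each provide 6 sessions per day, and 10 tutors could serve 4 X 6 X 10 = 240 students at a time. If students are served in semester-long blocks (two per school year), that program could reach roughly 240 X 2 = 480 students per year.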
- What requirements do programs have to meet to be identified as providers that are part of the ProvenTutoring coalition?
To join ProvenTutoring, replicable models and their providers must meet the following requirements:
- Eligible tutoring programs must be focused on increasing the achievement of students who are performing below grade level, or at risk of performing below grade level, in reading or mathematics in grades K-12.
- Eligible tutoring programs must include regular support by a human tutor, provided to groups of 4 or fewer students on a regular, high-frequency schedule during the school day, using a structured process.
- Tutoring programs must have been evaluated in studies that meet the evidence standards of the Every Student Succeeds Act (ESSA) at the Strong, Moderate, or Promising levels. These are described in another FAQ.
- Tutoring providers must demonstrate the following:
- Commitment to and capacity for delivering programs at scale;
- Commitment to implementing a program as it was evaluated (i.e., with the same level of training, coaching, and monitoring for tutors); and
- Commitment to joining with their peers to promote the use of proven programs, and to maintain high standards of implementation to maximize achievement outcomes of their programs.
- What were the criteria used to determine that tutoring programs were “proven”?
In order to be considered “proven,” programs had to have at least one study that met the EvidenceforESSA.org Strong, Moderate, or Promising standards of evidence and showed a significant and substantial positive effect on achievement (an effect size of at least +0.10). Specific standards were as follows:
a. Acceptable studies compare students who received tutoring during the school day to a similar control group of students who did not receive tutoring. Assignment to the tutoring or control conditions could be randomized or matched. Quasi-experimental studies must define the experimental group as all students who received any treatment and must demonstrate comparable baseline characteristics, including pretest equivalence and similar demographics. Retrospective quasi-experimental designs (QEDs) must meet additional criteria to address potential selection bias.
b. Students had to be pre- and post-tested on quantitative measures. Pretest measures for the final (analytic) sample had to be similar in tutoring and control groups (within 0.25 standard deviation units). Differential attrition had to be no more than 15%.
c. Studies had to be at least 12 weeks in duration.
d. There had to be at least 30 students in each group.
e. Outcome measures could not be ones created by developers or researchers. Tutors could not test their own students on research measures.
