At EdReports, our commitment to educational excellence has been constant, even as our review tools have evolved since our first reports in 2015. We recognize that educational research and practice are dynamic, and that our tools must reflect the most current understanding of effective learning.
In the decade since we published our first reports, we have made a range of enhancements to our review tools and processes, driven by emerging research and by feedback from our users and stakeholders, including educational researchers, educator reviewers, classroom educators, and district and state leaders.
What You Need to Know About Our Review Tools
- Consistent Quality: While our tools have evolved over time, the rigor and transparency of our educator-led reviews remain unwavering.
- Continuous Improvement: We have always aimed to ground each version of our review tools in the field's best understanding of established research at the time the tools were developed.
- Transparency: We aim to provide clear guidance on how to interpret and use reports from different tool versions.
Understanding Earlier Reports
Reports created with earlier versions of our review tools (v1.0 and v1.5) contain valuable insights, but may not fully capture the most recent educational priorities and research. Users should:
- Carefully evaluate how specific elements were addressed in earlier tools
- Refer to v2.0 review tools for the most current best practices
- Compare earlier tools to current ones to understand potential gaps
In January 2025, we added clearer guidance to all reports to increase users’ awareness of the tool version used for each report and of how our review tools have evolved. A green "meets expectations" rating from an earlier tool reflects alignment to review criteria at the time of the review, but may not encompass elements emphasized in our current tools. For instance, a green rating for a K–5 English language arts (ELA) report created using earlier tools may not mean that the materials fully address all science of reading requirements mandated by current state legislation.
How to Use Earlier Reports by Content Area
While not exhaustive, the following information highlights some of the most important factors to consider when using earlier reports for each content area:
English Language Arts
Knowledge Building
- In our current tools: Knowledge building is integrated throughout the indicators in Gateway 2: Comprehension Through Texts, Questions, and Tasks, in alignment with literacy research.
- In earlier tools: Knowledge building was evaluated across an entire dedicated gateway, Building Knowledge with Texts, Vocabulary, and Tasks, but separately from text quality and complexity and from the alignment of questions and tasks to standards.
Instructional Pathways and Program "Bloat"
- In our current tools: To ensure teachers understand a program's core instructional pathway, we look for materials to identify that pathway clearly and to provide detailed explanations of when and how to use any supplemental materials.
- In earlier tools: Version 1.5 tools asked whether a program contained more content than could feasibly be taught in a single school year, but did not explicitly look for guidance distinguishing a clear core pathway from supplemental materials.
Structured Literacy Practices and Standards (K–5)
- In our current tools: While EdReports has always prioritized alignment to standards, we recognize that ELA foundational skills standards are a unique case. Feedback from the field, including from standards authors, highlights that current standards don’t fully reflect the research and evidence-based practice behind how children learn to read. We agree, and our 2.0 ELA tools place stronger emphasis on alignment to structured literacy practices as a result.
- In earlier tools: Earlier core ELA tools prioritized standards alignment in foundational skills instruction but did not look for structured literacy practices with the same depth or breadth as version 2.0 review tools. Version 1.0 tools for foundational skills supplements were more closely aligned to research-based practices than version 1.0 core ELA tools, but not as tightly aligned as version 2.0 tools.
Phonics and Three-Cueing (K–5)
- In our current tools: Indicators look for materials to “emphasize explicit, systematic instruction of research-based and/or evidence-based phonics,” including a dedicated indicator that scores materials on the absence of three-cueing.
- In earlier tools: Earlier tools looked for systematic, research-based, explicit phonics instruction. Version 1.0 core ELA tools did not reference three-cueing explicitly. Version 1.5 core ELA tools and version 1.0 foundational skills supplement tools asked reviewers whether materials relied on three-cueing and whether its presence was distracting.
For more detailed information, see our ELA Review Tools page.
Math
Rigor and Balance
- In our current tools: All K–12 tools call explicitly for materials to develop each aspect of rigor and to provide students with opportunities to demonstrate their development of each aspect. This added clarity ensures that materials are evaluated with precision and consistency across grade levels. The indicator relating to balance across the three aspects uses binary scoring, awarding either zero or two points based on how well materials treat the three aspects both independently and together.
- In earlier tools: Earlier tools looked for intentional development of all three aspects of rigor and balance in treating them independently and together, but with less precision and consistency across grade levels than version 2.0 tools.
The Standards for Mathematical Practice
- In our current tools: Each of the eight Math Practices is the focus of a separate, binary score of zero or one point, based on how well materials support the development of that Practice. This ensures clarity and usability of reports, providing targeted insights into how materials engage students with each Math Practice.
- In earlier tools: Earlier tools grouped some of the Math Practices together under shared indicators, with some variations between K–8 and high school tools. Each Math Practices indicator in earlier tools allowed three-level scoring of zero, one, or two points, somewhat less precise than our current tools.
For more detailed information, see our Math Review Tools page.
Science
Phenomena and Problems
- In our current tools: Increased clarity of indicator language related to phenomena and problems driving learning, including a scored indicator to evaluate how well materials are designed to include both phenomena and problems.
- In earlier tools: The indicator relating to materials being designed to include both phenomena and problems was unscored in earlier tools, providing narrative evidence only. The intent of the indicators has remained largely consistent across tool versions, though in some instances the language of earlier indicators was less clear and specific in conveying that intent.
Three-Dimensional Learning and Assessment
- In our current tools: Increased clarity of indicator language and consistency across grade bands around how well materials are designed for three-dimensional learning and assessment.
- In earlier tools: Earlier indicators considered the same factors as current tools, but less consistently across grade bands. For example, in version 1.5 tools, the following indicators were present in high school tools but not in K–8 tools:
- Materials clearly represent three-dimensional learning objectives within the learning sequences.
- Materials are designed to incorporate three-dimensional performance tasks.
For more detailed information, see our Science Review Tools page.
All Content Areas
Multilingual Learner (MLL) Supports
- In our current tools: Introduction of dedicated, MLL-specific tools for each K–12 content area to highlight where and how multilingual students can be successful within the materials. Each tool covers MLL students’ full participation in grade-level content, coherence of MLL supports, teacher guidance, and assessment. For more detailed information, see our MLL Review Tools page.
- In earlier tools: In v1.5 tools, MLL supports were evaluated across two indicators in the final review gateway (3q and 3s), but not with the same breadth or depth as our current MLL tools. In v1.0 tools, there were some variations across content areas and grade bands: some tools used an approach similar to v1.5's, while others did not directly address MLL supports.
Assessments
- In our current tools: The content and scope of assessments are evaluated in Gateway 1 or 2 in all tools. This ensures that a larger percentage of programs receive ratings for this aspect of their assessments.
- In earlier tools: The content and scope of assessments were evaluated in the final review gateway in most earlier tools. In these cases, only programs that achieved passing scores in Gateways 1 and 2 were evaluated for this aspect of their assessments.