High-Effect Strategy Coach

An offline coaching tool for K-12 instructional coaches. Combines PRISM reasoning, high-effect-size research, and critical-thinking frameworks (FLOATER, Orwell, CRITIC, SIFT). No sign-in, no data sent anywhere — everything stays in your browser.

How to use this tool

Open it in any browser. Nothing is transmitted — all your notes live only in this tab. When you close the tab, they're gone unless you copy them out first.

  1. Clarify the goal. Start in PRISM Workflow. Answer each of the five prompts (Patterns → Reasoning → Ideas → Situation → Methods) as you think through the coaching scenario.
  2. Pick strategies. In Effect Sizes, filter by learning phase (surface / deep / transfer) and minimum effect size. Hattie's "zone of desired effects" starts at d = 0.40.
  3. Evaluate any evidence. Before accepting a claim (vendor pitch, study, blog post), run it through FLOATER, Orwell, CRITIC, or SIFT. Each produces a scored table you can copy into your notes.
  4. Build the plan. In Plan Builder, assemble the recommendation: goal, strategies, effect sizes, SOLO depth, success measure. Copy or print it.

This page is a companion to the High-Effect Strategy Coach custom assistant. It is not the assistant itself — it is a reference and note-taking tool you can use on a laptop in the classroom, during a coaching cycle, or in a PLC meeting.

The four frameworks at a glance

Framework | Best used for | Criteria
PRISM | Structured reasoning through a coaching decision | Patterns, Reasoning, Ideas, Situation, Methods
FLOATER | Evaluating scientific or empirical claims | Falsifiability, Logic, Objectivity, Alternative explanations, Tentative conclusions, Evidence, Replicability
Orwell Test | News, media, or opinion articles | Facts, Source, Method
CRITIC | Claims where the claimant's motivation matters | Claim, Role of claimant, Information, Testing, Independent testing, Conclusion
SIFT | Rapid evaluation of online / social-media content | Stop, Investigate source, Find better coverage, Trace claim to original

PRISM workflow for a coaching scenario

Work through each step. Don't skip ahead — the value is in pausing at each one. The stems are prompts; you don't have to use them literally.

P — Patterns (What patterns do you see?)

  • "Here, I noticed that…"
  • "The pattern I see is…"
  • "This reminds me of…"

R — Reasoning (How do things fit together?)

  • "This connects to… because…"
  • "The reason for this is…"
  • "One thing that makes sense is…"

I — Ideas (What different ideas can we mix?)

  • "Another way to think about this is…"
  • "I have a different viewpoint…"
  • "To add to that idea…"

S — Situation (What's the bigger picture?)

  • "The bigger picture shows…"
  • "Beyond what we see here…"
  • "This connects to other things by…"

M — Methods (How can we check our answers?)

  • "We can test this by…"
  • "One way to check this is…"
  • "Another approach would be…"


Effect sizes (Visible Learning MetaX, Feb 2025)

Hattie's zone of desired effects starts at d = 0.40. Strategies are tagged by band: d ≥ 0.80 considerably accelerates, 0.40–0.79 accelerates, and < 0.40 is lower yield.
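The banding and filtering rules above can be sketched as small helpers (the function names and strategy shape are illustrative, not the tool's actual code):

```javascript
// Classify an effect size d into Hattie's impact bands.
// Thresholds follow the legend above: 0.40 opens the zone of desired effects.
function impactBand(d) {
  if (d >= 0.80) return "considerably accelerates";
  if (d >= 0.40) return "accelerates";
  return "lower yield";
}

// Filter a strategy list by learning phase and minimum effect size,
// mirroring the two filters in the Effect Sizes view.
function filterStrategies(strategies, phase, minD) {
  return strategies.filter(s => s.phase === phase && s.d >= minD);
}
```

For example, `impactBand(0.57)` returns "accelerates", placing a d = 0.57 strategy inside the zone of desired effects.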

Phase | Strategy | Effect size (d) | Impact (interactive table; rows are filled in by the tool)

FLOATER — evaluate a scientific / empirical claim

Use when a colleague, vendor, or article makes a claim about "what works" in education. Score each letter 1 (poor) to 5 (strong). The recommendation updates automatically.

Letter | Criterion | What the source says | Score (1–5) (interactive table; filled in by the tool)
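A minimal sketch of how seven 1–5 scores might roll up into a recommendation (the cutoffs and wording here are assumptions for illustration, not the tool's actual logic):

```javascript
// Average the seven FLOATER scores (each 1-5) and map the mean to a
// coarse recommendation. Cutoffs are illustrative assumptions.
function floaterRecommendation(scores) {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  if (mean >= 4) return "Strong: the claim holds up well";
  if (mean >= 3) return "Mixed: verify the weak criteria before acting";
  return "Weak: do not act on this evidence yet";
}
```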


Orwell Test — evaluate news / media / opinion

If any criterion fails, be skeptical. Two or three failures signal likely propaganda.

Criterion | Question | Your assessment | Pass? (interactive table; filled in by the tool)
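The failure rule can be expressed as a tiny helper (the function and labels are illustrative):

```javascript
// Orwell Test verdict from three pass/fail flags (Facts, Source, Method).
// Per the rule above: one failure warrants skepticism; two or more
// failures signal likely propaganda.
function orwellVerdict(passes) {
  const failures = passes.filter(p => !p).length;
  if (failures === 0) return "passes";
  if (failures === 1) return "be skeptical";
  return "likely propaganda";
}
```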


CRITIC — six-step credibility evaluation

Best when the claimant has an incentive or agenda (vendor, think tank, advocacy group).

Step | Prompt | Your notes | Score (1–5) (interactive table; filled in by the tool)


SIFT — rapid check on online content

Use for social posts, blog headlines, viral screenshots. Under two minutes per pass.

  1. Stop. Pause before sharing or trusting.
  2. Investigate the source.
  3. Find better coverage.
  4. Trace claim to original context.


Plan builder

Assemble the recommendation you'll give to the teacher or take into a coaching cycle.

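The assembled plan is essentially one record plus a copy-out step; a sketch of its shape (all field names here are assumptions, not the tool's data model):

```javascript
// Render a finished plan as markdown for pasting into coaching records.
// The plan object's fields (goal, strategies, soloTarget, successMeasure)
// are illustrative assumptions.
function planToMarkdown(plan) {
  return [
    "## Coaching plan: " + plan.goal,
    "",
    "Strategies: " + plan.strategies.map(s => s.name + " (d = " + s.d + ")").join(", "),
    "SOLO target: " + plan.soloTarget,
    "Success measure: " + plan.successMeasure,
  ].join("\n");
}
```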

SOLO taxonomy

  1. Prestructural — no connections between ideas; surface-level misunderstanding.
  2. Unistructural — grasps one relevant concept.
  3. Multistructural — identifies multiple concepts but does not connect them.
  4. Relational — integrates concepts into a coherent whole.
  5. Extended Abstract — generalizes, transfers, or creates new applications.

Choose strategies that push students one level deeper than where they are today, not two.
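The one-level-deeper rule can be expressed directly (a sketch; the names are illustrative, not the tool's code):

```javascript
// SOLO levels in order, from the taxonomy above.
const SOLO_LEVELS = ["Prestructural", "Unistructural", "Multistructural",
                     "Relational", "Extended Abstract"];

// Target exactly one level deeper than the current one, never two,
// capped at Extended Abstract.
function soloTarget(current) {
  const i = SOLO_LEVELS.indexOf(current);
  if (i === -1) throw new Error("Unknown SOLO level: " + current);
  return SOLO_LEVELS[Math.min(i + 1, SOLO_LEVELS.length - 1)];
}
```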

Feedback levels (Hattie)

  1. Task — correct or incorrect information about the work itself. High impact on novel tasks.
  2. Process — feedback about the strategies used. Highest average impact on deep learning.
  3. Self-regulation — feedback that builds the student's capacity to monitor their own learning.
  4. Self — praise ("good job"). Low impact on learning; use sparingly.
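One common pairing of learning phase to feedback level, useful when drafting a plan (this mapping is an assumption for illustration; the levels above do not prescribe a strict one-to-one match):

```javascript
// Illustrative phase-to-feedback pairing (an assumption, not a rule):
// task feedback while learning is at the surface phase, process feedback
// for deep learning, self-regulation feedback when pushing toward transfer.
const FEEDBACK_FOR_PHASE = {
  surface: "Task",
  deep: "Process",
  transfer: "Self-regulation",
};
```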

Do / Don't

Do

  • Ground every recommendation in the embedded effect-size table.
  • Pause coaching and evaluate whenever someone cites evidence.
  • Balance surface, deep, and transfer strategies across a unit.
  • Name at least one formative and one summative measure.
  • Match the feedback level to the learning phase.

Don't

  • Invent effect sizes or source them from memory.
  • Recommend Google Jamboard or Flip (formerly Flipgrid); suggest digital whiteboards or Padlet Video Recording instead.
  • Advance past a PRISM step before the teacher has engaged with it.
  • Accept a vendor or marketing claim without running it through FLOATER, CRITIC, or SIFT first.
  • Pile on multiple strategies at once — pick one or two and measure impact.

Common biases and fallacies to flag

  • Confirmation bias — accepting evidence that supports what you already believe, dismissing what doesn't.
  • Anecdote as evidence — one classroom story is not a pattern.
  • Correlation presented as causation — two things moving together does not mean one caused the other.
  • Appeal to authority — "a famous educator said so" is not evidence.
  • Cherry-picking — quoting only the results that support the claim.
  • Survivorship bias — studying only the schools or students who succeeded and ignoring the ones who didn't.
  • Publication bias — positive results get published; null results often don't.
  • Novelty effect — any new thing can produce a short-term bump; watch for whether it persists.

About this tool

Self-contained HTML — no external code, no network calls, no data storage. Your notes exist only in this browser tab. To save, copy the generated markdown and paste it wherever you keep coaching records.

Companion to the High-Effect Strategy Coach custom assistant (see prompt.md and README.md in this folder). The underlying research, frameworks, and effect sizes are drawn from Visible Learning MetaX (Feb 2025), Melanie Trecek-King (FLOATER), Wayne Bartz (CRITIC), Mike Caulfield (SIFT), and the PRISM framework.

    Companion to the High-Effect Strategy Coach custom assistant (see prompt.md and README.md in this folder). The underlying research, frameworks, and effect sizes are drawn from Visible Learning MetaX (Feb 2025), Melanie Trecek-King (FLOATER), Wayne Bartz (CRITIC), Mike Caulfield (SIFT), and the PRISM framework.