Anyone else struggle when a program looks great on paper but completely falls apart in session?
I’ve had a few programs recently that made complete sense during planning but just didn’t work once we were in session. Data collection looked clean, procedures were clear, and everyone agreed on the goal. Then reality hit: the learner wasn’t responding the way we expected at all. It made me question whether the issue was the program design or my assumptions going in. Curious how often others run into this. How do you decide when to tweak a program versus scrap it entirely?
I think time might be the best indicator. I’ve had a couple of new programs fall flat in the first set of trials. Zero responding. This was particularly true when I was implementing communication skills programs, like answering yes/no questions and responding to name, despite trying various prompts. After a few sessions of continuous modeling, we started seeing progress.