By: Tony DePrato | Follow me on Twitter @tdeprato
A few weeks ago I was on a campus, but not my campus. I was speaking with some technology teachers who prefer to be called tech-integrators. After a short and very succinct speech about their beliefs in the technology integration model, I asked them two questions. In both cases, the answers were not what they should have been.
Question 1: Is the integration scheduled, or do you wait for teachers to come to you?
The answer was a very common one: teachers come to us. This model has some defensible merits. The driving force is that a few technology integrators can focus on class projects over longer periods of time, and use their own initiative to improve technology in the classroom.
The main issue with this model is learning accountability. There is no accountability for what students need, and no metric stating what students need.
For example, the IB Design Technology SL programme recommends 150 total teaching hours. This indicates that a group of people looked at the entire course experience and the desired outcomes, and concluded that students need 150 hours.
A technology integration model needs the same discussion, and it needs some metrics. Since technology integration is not a new concept, determining how many hours students need to be engaging with a differentiated curriculum is a “knowable thing”.
Without determining the metrics, how can anyone conclude an on-demand model is the best way to proceed?
The school should have a core set of technology related standards for teachers and students. The technology integration program needs to be able to add metrics to these standards, set standards for contact time, and track how all these requirements are being met. This is normal accountability.
Aside from accountability within the technology integration curriculum, another major issue with on-demand planning is that people tend to work in their comfort zones. People tend to develop patterns. Because of this, they will often want to engage students in a routine and predictable way. Routine and predictable does not equal learning. Trial and error equals learning.
Breaking routine and creating opportunity takes a plan. A plan in a school needs some type of schedule so that people can jump in and join. Just like any class (in school and out of school) students need to find a time to explore, and finding time means knowing the options.
Question 2: How are you mapping or tracking what you are doing?
The answer was not straightforward. I do believe that, if the work fell under the scope of certification, there would be evidence in lesson plans and emails. However, as a school administrator I want to be able to get a quick snapshot of what is happening in any and all classrooms. Whatever tools the teachers are using for curriculum tracking and mapping, the integration team should be using as well.
Let’s talk about Atlas Rubicon.
I am not an Atlas Rubicon pundit. However, when I work with their developers, and I have a clear report driven goal, I get results. In other words, my school gets results.
If a school is running a technology integration model, and the school is using Atlas or any other curriculum tracking system, there should be a mandate to track technology integration and cross-curricular integrations. The latter is often forgotten, but it goes hand-in-glove with technology integration.
Here is what modifying an Atlas template to track technology integration and cross-curricular integrations would look like:
Here is what a simple report looks like in terms of data entry on every subject, and every map:
I can run this report for all year levels in about 10 minutes. I can review the work planned and the work that has already been done. The cross-curricular report is great for discussions. The data is simple, and could easily be part of monthly department level meetings.
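The kind of report described above can be reduced to very simple data: subject, year level, and the integration contact hours logged in each map. As a minimal sketch (the record fields and the minimum-hours threshold are my own assumptions for illustration, not an Atlas Rubicon export format), here is what aggregating those hours against a minimum-contact-time metric could look like:

```python
# Hypothetical sketch: summing technology-integration contact hours
# exported from a curriculum-mapping system, then flagging groups that
# have not yet met a minimum-contact-time metric. Field names and the
# threshold are assumptions, not an actual Atlas export schema.
from collections import defaultdict

# Each record: (year_level, subject, month, integration_hours)
records = [
    ("Year 7", "Science", "September", 2.0),
    ("Year 7", "Science", "October", 1.5),
    ("Year 7", "English", "September", 1.0),
    ("Year 8", "Math", "October", 3.0),
]

def hours_by_subject(rows):
    """Sum integration contact hours per (year level, subject)."""
    totals = defaultdict(float)
    for year, subject, _month, hours in rows:
        totals[(year, subject)] += hours
    return dict(totals)

def flag_below_minimum(totals, minimum=2.0):
    """Return groups that have not met the minimum-contact-time metric."""
    return sorted(key for key, hours in totals.items() if hours < minimum)

totals = hours_by_subject(records)
print(totals[("Year 7", "Science")])  # 3.5
print(flag_below_minimum(totals))     # [('Year 7', 'English')]
```

A one-page summary like this is exactly the snapshot a monthly department meeting needs: which groups are on track, and which have fallen below the agreed contact-time metric.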
With systems like Atlas, the integration team can have access to all the maps. They can make certain this data is collected, they can add notes, and they can contribute to the reflections.
Question 3: Whose fault is it?
This is the question that was on my mind for a few weeks after the initial experience. Upon reflecting on programs that I have been involved in, and programs I am currently involved in, I decided the blame was always clearly on my shoulders when accountability was lacking. As the administrator providing oversight I should require accountability on a level that is reportable, encapsulated, and not taxing for the educational technology team.
If I am the person who leads or supervises the educational technology planning, I should be assigning metrics for minimum contact time, maximum exposure, and differentiation. The technology integration team should be focused on delivering an excellent program to meet those metrics.
I guess the question is: do we know what we don’t know, and how can we find out?