Week 3 Response—Shelley
In “Putting Education in ‘Educational’ Apps,” Hirsh-Pasek et al. state, “when we process information that is more meaningful, we often (though not always) are more mentally active, making more connections across brain areas.” This statement is followed by a series of questions for designers to consider when building educational apps: “Does the app experience tap into the child’s personal history, activate prior knowledge of a subject, or build a rich narrative? Does it extend important interpersonal experiences with parents, siblings, or peers? How does it connect to the child’s role in his or her school community and, ultimately, to related domains of knowledge, such as science, mathematics, or history (cf. Rogoff, 1995)?” (p. 13)
As a learning technology designer, I find these questions both compelling and challenging. Though tapping into personal experience and history to activate prior knowledge is a powerful concept, designing scalable tools that capitalize on personal knowledge is difficult. How do we know what personal experiences learners have had? How do we create tools that can accommodate a wide variety of narratives? How do we design platforms that are nimble and structurally sound enough for users to personalize them for themselves?
For example: I have a 16-month-old nephew who is learning to speak. Much to my chagrin, he says mama, dada, nana, and papa, but not aunta (his soon-to-be name for me). He has learned to interact with the buttons on the iPhone quite well and gets a kick out of pausing the call and/or hanging up on me via the big red button on FaceTime. Recently, I daydreamed of an app in which my sister could upload photos of the whole extended family, or any other person/place/thing, and then record herself saying the names/words that correspond to each photo. My nephew would then see a screen with that photo and hear an audio recording of my sister pronouncing the word. Immediately after, he’d be prompted to tap a red button and, with a Siri-like voice recognition function, repeat the word back. Ideally, if he said the word successfully, the photo would turn into fireworks or make a celebratory noise as a reward.
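The interaction loop I’m imagining could be sketched in a few lines. This is only a hypothetical illustration: all names here (FlashCard, recognize_speech, run_card) are mine, and the speech recognizer is a stub standing in for a real platform speech API.

```python
# Hypothetical sketch of the personalized word-learning loop described above.
# A real app would replace recognize_speech with an on-device speech API.

from dataclasses import dataclass


@dataclass
class FlashCard:
    photo_path: str   # photo uploaded by the parent
    word: str         # target word, e.g. "aunta"
    audio_path: str   # parent's recording of the word


def recognize_speech(audio: bytes) -> str:
    """Stub: pretend the recognizer heard the target word."""
    return "aunta"


def matches(heard: str, target: str) -> bool:
    # Toddler pronunciation is fuzzy, so exact matching is too strict
    # in practice; lowercase text comparison is just a placeholder.
    return heard.strip().lower() == target.strip().lower()


def run_card(card: FlashCard, child_audio: bytes) -> bool:
    # 1. Show the photo and play the parent's recording (UI omitted).
    # 2. Child taps the red button and repeats the word.
    heard = recognize_speech(child_audio)
    # 3. A successful attempt triggers the fireworks/celebration reward.
    return matches(heard, card.word)


card = FlashCard("photos/aunt.jpg", "aunta", "audio/aunta.m4a")
print(run_card(card, b""))  # True with the stubbed recognizer
```

Even this toy version makes the hard part visible: everything except `matches` and `recognize_speech` is trivial, and those two pieces are exactly where matching a parent’s pronunciation to a toddler’s becomes an open problem.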
This is an example of a tool that would take advantage of context: it’s entirely personalized and builds on prior knowledge. And yet, one can imagine great challenges in creating a voice recognition platform that would match my sister’s voice and pronunciation to that of my nephew, and accommodate any number of other parents and their babies too. My example was of course a specific one, and there are existing tools that better fit this premise, but the point was to illustrate the challenge: once we introduce customizable elements, the technology itself becomes much more complicated. I think there is an incredible need for tools with this kind of adaptive ability, and I would love to learn more about how to personalize these kinds of learning technologies for large, diverse audiences.