Week 5 Annotation

van Deursen, A., & van Diepen, S. (2012). Information and strategic internet skills of secondary students: A performance test. Computers & Education, 63.

Here we have a quantitative observational study of Dutch secondary students on the topic of internet skills. The authors point out, and I agree, that many educators assume their students have a firm grasp on technology, perhaps an even firmer grasp than the educators themselves. Van Deursen and van Diepen helpfully break internet skill into four categories: operational, formal, information, and strategic (2012, p. 1). It turns out that Dutch secondary students have solid operational and formal skills (i.e., they get the fundamental navigation involved in browsing the web), but they are found lacking in information and strategic skills, which involve reflecting on their own methods and solving problems. In this regard, I would say that Dutch students sound much like American students. I have had to caution my teacher peers many times against assuming their students were all internet savants. Especially as smartphones replace PCs, even some of the internet competence confirmed by this 2012 study may be eroding.

In the theoretical background, the authors note several gender differences in internet search behavior. Boys tended to search faster, using shorter phrases; girls tended to use more keywords and to dwell on a single source longer. I was a bit skeptical of the authors' claim that these somewhat provocative gender differences "did not result in large differences in the actual results" (2012, p. 3). Indeed, their regression analysis found no large discrepancy, but I remain suspicious of how these seemingly significant differences in search patterns were dispensed with so easily.

I find this article potentially useful for my own research, as it lays out a solid blueprint for how to conduct a quantitative observational study with a reasonable number of participants (n = 54). The internet skills framework of van Deursen and van Dijk that the authors employ also gives me a model for how I might deploy a theoretical framework in my own quantitative study.

Week 4 Annotation

Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9(1), 47–63.

Hoepfl’s article is a great starting point for those of us who are just dipping our toes into the ocean of research, as she makes a strong case for the benefits of qualitative research over quantitative research in the field of educational technology. From other sources that we have read, I get the impression that although randomized experimental studies are favored by those who implement policy, qualitative research is gaining popularity among those who work in the field of education, and this author does an excellent job of explaining why. Hoepfl points out that “statistical research is not able to take full account of the many interaction effects that take place in social settings” (Hoepfl, 1997, p. 48). Qualitative research is useful for fully understanding the context of a research situation, bringing to light what the variables actually are, and thus opening the door for the precision instrument of quantitative research in the future (Hoepfl, 1997, p. 49).

The only criticism I would level here is that the author often uses positive terms to describe the ideal qualitative study. Words like "rich," "complex," and "dynamic" are sprinkled through the otherwise straightforward description of qualitative methods (Hoepfl, 1997, pp. 48, 55). In contrast, the author is quick to dispense with purely quantitative studies, implying that they don't capture the same complexities that qualitative studies do (Hoepfl, 1997, p. 48). To her credit, the author does endorse the idea of mixed methods studies.

This primer is useful for me in a macro sense. With each study we read, I am thinking, of course, about my own contribution to the field, and one of my first questions is whether I want to go the qualitative or quantitative route. This primer gives me some concise, numbered steps to take when I do start my own research, and it also gives me a guide for evaluating the merits of the educational technology research I encounter.

Journal Review: IJITDL

This week, I’m going to submit my first journal review. I’ve been doing some preliminary searches to find a journal that fits with my job and research interests, and when I found the title of this one, I knew I had to give it a look.


International Journal of Instructional Technology and Distance Learning.

https://cmich-primo.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=01CMICH_ALMA51614983320003781&context=L&vid=01CMICH&search_scope=EVERYTHING&tab=jsearch_slot&lang=en_US

The International Journal of Instructional Technology and Distance Learning (mercifully shortened to IJITDL, which is still a mouthful) has been around for 13 years now. The journal is dedicated to promoting research in distance education for educators, researchers, instructional designers, content creators, and administrators. It seems to focus mainly on education, but some of its content would also apply to training in the business world. Looking at sample issues, the journal does live up to the "international" in its title: I see studies from Jordan, Tanzania, and Saudi Arabia, just to name a few.

This journal looks like it would apply directly to my career as Director of Distance Learning. My position actually incorporates more academic technology than my department's title indicates, so this journal's scope is right in line with my interests. I see studies about mobile phones increasing campus awareness among students, randomized experiments about instructional design methods, and studies on the effects of self-paced learning. I like what I see so far from the IJITDL.

Week 3 Annotation

Salomon, G., & Perkins, D. (2005). Do technologies make us smarter? Intellectual amplification with, of and through technology. In R. J. Sternberg & D. D. Preiss (Eds.), Intelligence and technology: The impact of tools on the nature and development of human abilities (pp. 71-86). Mahwah, NJ: Lawrence Erlbaum Associates.

Salomon and Perkins tackle the issue of whether technology makes us smarter. As the authors point out, "literacy has been claimed to modify minds" (p. 72). Helpfully, the authors supply a definition of "smartness," eschewing raw computational power and instead focusing on cognitive performance (p. 73). With this definition in mind, they approach the problem in three ways. The first is effects with technology; that is, how technology can amplify performance while the technology itself is present (p. 74). Second is effects of technology, meaning that the effects persist even when the technology is absent. The third, and most provocative, is effects through technology: large-scale changes to entire learning environments as a result of technology.

I found the structure of the article to be a strength. Examining three different effect types brings to mind many examples from the past and forecasts for the future. For effects with technology, I have visions of cybernetic implants that add raw computational power to human critical thinking. For effects of technology, I imagine parts of our brains lighting up like Christmas trees when some unforeseen technology engages a long-dormant area. For effects through technology, I think about how tablets have changed the physical layout of some classrooms, and how the next innovation will change that physical space again. Aside from one historical quibble*, I found no fault with this article.

As for how it relates to my own research interests in LMS design, I could easily see different LMSs affecting student achievement. I might not put it in the category of "smartness," but I can certainly see how different tools (widgets, plugins) could affect student performance on, say, a discussion post.

*My historical quibble is the example of a technological advance "from the longbow to the crossbow to the rifle" (p. 79). The crossbow was not an advance on the longbow; the crossbow came first, as an advance on the ordinary bow. As the Hundred Years' War proved, the English innovation of the six-foot longbow made French crossbowmen look obsolete and ultimately led to lopsided English victories at battles such as Crécy and Poitiers.

Bonus Mini-Annotation

I couldn’t help myself, so here is a bonus annotation from this week. The Roblyer article prompted me to research data repositories (aka, dataverses), which led to an interesting article about how those dataverses are being refined for mass audiences.


Altman, M., Castro, E., Crosas, M., Durbin, P., Garnett, A., & Whitney, J. (2015). Open Journal Systems and Dataverse integration: Helping journals to upgrade data publication for reusable research. Code4Lib Journal, (30).

This article details the process of using the SWORD2 protocol to interface with Harvard University's publicly accessible Dataverse for open access publishing and sharing of research data. The Dataverse project has its roots in the social sciences (believe it or not), but it is extending its reach into fields such as astronomy and biology. The barrier to entry seems to be the front-end interface, which the authors' SWORD2-based integration with Open Journal Systems is designed to lower.
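
Out of curiosity, I sketched what the first step of that interface looks like in practice. This is my own minimal illustration, not code from the article: SWORD2 rides over plain HTTP, so a client begins by fetching a "service document" that lists where it may deposit. The endpoint path and the token-as-username authentication below follow the Dataverse SWORD API documentation as I understand it; the host and token are placeholders, and the details are assumptions to verify against a real installation.

    # A minimal sketch (mine, not the article's): fetch the SWORD2
    # "service document" from a Dataverse installation. This is the
    # discovery step any SWORD2 client performs before depositing.
    import requests

    HOST = "https://demo.dataverse.org"  # assumed public test server
    API_TOKEN = "replace-with-your-api-token"  # hypothetical placeholder

    url = HOST + "/dvn/api/data-deposit/v1.1/swordv2/service-document"
    # Dataverse's SWORD API expects HTTP Basic auth with the API token
    # as the username and an empty password.
    resp = requests.get(url, auth=(API_TOKEN, ""))
    resp.raise_for_status()
    print(resp.text)  # Atom XML listing the collections open for deposit

Nothing fancy, but it shows why a standard protocol matters: any tool that speaks HTTP, from a journal platform like OJS down to a ten-line script, can publish to the same repository.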

This article touches on one of my strongest research interests, which is the open source movement in education. The Dataverse combines open access, open publishing, and open data on open source platforms, a combination that creates exciting possibilities for collaboration. At Campus Technology 2017, I attended a talk called "Developing and Implementing an Online Research Data Repository" by Ray Uzwyshyn, where he gave a hint of the tantalizing collaborative opportunities that data repositories could hold for the future of research. This is one area I am interested in exploring further.

Week 2 Annotated Bibliography

Roblyer, M. D. (2005). Educational technology research that makes a difference: Series introduction. Contemporary Issues in Technology and Teacher Education, 5(2), 192-201.

In this meta-commentary about research in educational technology, the author implores colleagues to adopt baseline standards so that the field of ed tech can see the kinds of gains that medical research saw in the 1910s. This is the perfect article for a beginning doctoral student to read, which the author actually acknowledges: "The series should prove useful as a set of 'how to' directions for those just beginning their research efforts (e.g., dissertation studies)" (p. 194). The author highlights some distinctions between research in the hard sciences and the social sciences, between qualitative and quantitative studies, and between objective and naturalistic inquiries. Most helpful are the Five Pillars of Good Educational Research: significance (or, as I like to think of it, the researcher's answer to the question "So what?"), a solid theoretical framework, appropriate research methods, a structured abstract, and identification of future research needs.

The author states that one of the problems with educational technology is that the technology "change[s] so quickly that it is difficult to build a body of findings over time" (p. 193). Likewise, this primer is approaching its 13th birthday, which makes it an old man in the digital age. While the need for quality research is as relevant as ever, we are now two administrations removed from the political context of No Child Left Behind, and I would like to see that particular section updated.

An article like this can only help structure my own research as I begin my doctoral program. One area that piqued my interest was the author's commentary on cumulativity and the researcher's need to "reflect a clear indication of where the current study fits in providing the required information, what kinds of studies must follow, and why" (p. 197). This makes me think of the 21st-century phenomenon of data repositories (aka dataverses) used by large institutions to publish studies that might never have appeared in scholarly publications because the results weren't flashy or disruptive. These so-called "failed studies" would nevertheless prove useful to other researchers, and it is exciting to think of the effect these data repositories might have on future research.