Research Continuity

  • 1.  Resources for getting started working with participants online

    Posted 03-20-2020 15:33

    A lot of us right now are figuring out how to continue our research given the need for social distancing. I run Lookit, an online platform for asynchronous webcam-based studies with kids, and have recently heard from a number of groups interested in exploring online testing. Below are some thoughts about the transition and an invitation to get started using Lookit if you're interested.

    We have been planning to open the platform up this spring for anyone who wants to use it, and are working as fast as we can to finish up the last critical pieces. However, given recent events, we also want to invite anyone who's interested in running studies online to get started. Here on our wiki is an overview of the concrete steps that will entail, and here is the documentation with information about how using Lookit works. There is a small chance you may end up with a study ready to go before we are ready to go live, in which case we will do our best to accommodate you!

    Some cautionary thoughts on the timeline to expect

    To be clear, though: this is unfortunately NOT the quick fix you might be hoping for. It's not, for instance, a way to keep collecting data over the next few weeks. The steps involved are more like setting up a new research lab in a museum or field site than like posting a study on Amazon MTurk. If you started immediately, and everything went smoothly (which, as we know, always happens in science) the soonest you could launch your study would be well over a month from now -- with much longer delays possible depending on how long it takes you to get an institutional agreement signed by an authorized signer from your institution. An approximate timeline is included in the steps linked.

    I also want to echo the caution of colleagues who already teach online, as they watch the rest of us switch to remote instruction with weeks or days to prepare: online research requires distinct skills and a lot of work. Videoconferencing sounds more promising as a way to quickly and directly adapt in-lab studies, but my strong impression is that doing that well also requires a substantial investment of time and energy. On that front, you might be inspired by Mark Sheskin's work in this area, which uses Adobe Connect.

    Finally, online recruitment also remains a challenge, so even once your study is posted, please know that data may come in only slowly after that. However, in contrast to lab research, everyone benefits from each group's recruitment efforts. Participants who come to the site and complete one study are very likely to come back and complete other studies, so we believe the participant pool will grow fairly rapidly once we gain some momentum.

    I completely understand that many of us are under pressure to finish up a study this term and just need a quick solution; I hope some of the other ideas below will be more appropriate in that case, and that other people have advice on that front. I especially feel for folks who have invested a lot of time and energy in longitudinal studies whose next timepoint(s) will be disrupted, and which may or may not be translatable to an online environment at all or in time.

    But in my experience it is also easy - and incredibly common - to feel a false sense of urgency about our work. One of the reasons I caution people about the timeline for getting Lookit studies up and running is how often people have told me they really need to have a study up and running by, say, 8 days from now, with data collection completed 43 days after that. And I say "hmm! I strongly suspect it might take longer, but you never know!" and make sure there are no technical obstacles that will be the limiting factor. And then 18 months later, they have IRB approval and finalized stimuli. Maybe this is exacerbated for online studies on Lookit since they are all side projects for our beta testers; this is just the particular window I get into other developmental labs. And the world does not collapse because we did not test the babies fast enough. In fact, those studies are much improved by the thought that goes into them in the meantime.

    Maybe testing online really is what you need right now, or this feels like an opportune time to dive in - in which case, we'd be genuinely delighted to welcome you to Lookit! But maybe what you need right now is some time to write up results, come up with a bunch of new experiment ideas, finish your preregistration, work on a secondary analysis project, finally get comfortable using R, take an online Bayesian stats course... or take care of your family. It seems very unlikely that any of those will be worse choices in the long run than frantically figuring out how to keep collecting data.

    Other approaches for online testing

    I hope people who have used these approaches can chime in with ideas and resources!
    * Videoconferencing with families, using the same recruitment & scheduling approach as in the lab, after setting up IRB approval and a way to present stimuli & store data
    * Testable just opened up their platform for free for the rest of the academic year! (I don't have personal experience using it, but it seems like a worthwhile tool to explore for experiment design; I'm not sure if the participant pool side is now free)
    * Survey-based studies

    I also think there is a real opportunity here to think about ways to provide direct value to families right now, too, especially for videoconferencing studies. Can your study be part "Skype a Scientist," part experiment? Is there a fun 5- to 10-minute lesson or activity you could teach remotely?

    Kimberly Scott

  • 2.  RE: Resources for getting started working with participants online

    Posted 03-26-2020 16:24
    Hi Kimberly,

    Thank you for your post. I think it's incredibly useful to explore different approaches to collecting data.

    I have a new study with Thalia Goldstein on Santa where we've pieced together an approach for collecting data completely online. We are interested in interviewing children who have stopped believing in Santa Claus sometime roughly in the last six months, and it's hard to find all of those children locally.

    We set up a brief information page about our study online:

    From there, parents can follow a link to a scheduling service website to choose a time to participate. Once we've received a confirmation of their appointment, we send them a link to a REDCap consent form so that they can a) provide parental consent and b) review the interview questions for children to see if there are any questions the parents would like removed from the interview. We then use Skype for the interviews themselves. We're storing the audio recordings of the interviews on Databrary, and after the interview is complete and parents complete a separate Qualtrics survey, we send electronic gift cards.

    It took some time to get everything set up, but we've certainly been able to find more participants than if we had only focused on in-person testing. That said, I think it's still challenging to find parents who both a) have a child who qualifies and b) are willing to participate. I've posted on Facebook and Twitter, and I've had friends of friends share the brief ads. I've started reaching out to friends who have their own labs to see if they could share our ads (and vice versa), but we need to think creatively about how to recruit moving forward. Do you have any suggestions?

    Candice Mills
    The University of Texas at Dallas
    Twitter: @CandiceMMills

  • 3.  RE: Resources for getting started working with participants online

    Posted 03-31-2020 15:51
    In our experience, online recruitment is perpetually surprising in its difficulty. It really feels like opening up the participant pool so much, and reducing barriers to participation, should lead to an immediate flood of kids. Or as if it's just a matter of actually maintaining a social media presence / running a few ads / talking to a reporter / etc. We've had a few enthusiastic undergrads focus just on social media & advertising as I slowly come to terms with the fact that it's not actually that easy :)

    I still very much believe it's doable! Just want to validate the feeling of "this really can't be that hard" as data come in slowly.

    You have an interesting challenge in that even people who participate are pretty unlikely to know anyone else who's eligible - whereas if you want 3-year-olds or even 7-month-olds you might have a good shot at spreading the word via past participants. Some of the other ideas we'd consider are:
    - Google AdWords (using it to its full potential is a skill in its own right, but you can start with a budget of $20 or so and babysit your ad variants closely)
    - Facebook ads (less tuneable based on parents googling e.g. "when do kids stop believing in santa" but easy & lots of parents)
    - Reaching out to local media about a feature story (possibly starting with university press folks if they'll help)

    For studies with less specific criteria, it's generally been most productive for us to encourage sharing in parent groups, e.g. on Facebook. Once we have more energy for this, I suspect two things will help: more formal schemes that reward families for spreading the word, and working with individual labs to build relationships with local institutions (libraries, kids' play areas, community centers, etc.) that can help with recruitment.

    Taking a step back, studies like this with very narrow inclusion criteria are going to be the big winners from discipline-wide recruitment and/or testing infrastructure. All of us recruiting for our own studies is far less efficient than we could be together - imagine all the people you will have reached by the time you finish this study who thought "oh cool! but my kid still believes in Santa," or "oh cool! but my kid JUST stopped believing in Santa which is why I was reading this article," etc. and who would have loved to participate in some other study :)

    We have some documentation of various recruitment efforts on the Lookit wiki - it's not a lot, but it may give a feel for, e.g., where a few other groups have ended up on cost per participant for ads.

    Kim Scott
    Research scientist | MIT Early Childhood Cognition Lab
    Participate: Learn more: