By Julien Colomb | July 2, 2018
text obtained from: https://openadventures-blog.lib.cam.ac.uk
In a series of interviews, data champions were asked about their “happiest data moment.” Different answers were given.
David Marshall, @futurelib
Around two years ago we (Futurelib) finished the data gathering phase of a project, Protolib, looking at the design of physical study spaces. We had prototyped different study spaces based on the findings of a collaborative design process conducted with Cambridge students and researchers. We conducted hours (as in 300+ hours…) of observation in these prototype spaces, and gathered data in various other ways, such as interviews with people leaving the spaces, feedback walls, comment cards and questionnaires. The first thing we did as researchers after this was to brainstorm the insights we had arrived at from this work. To see themes and ideas emerging so quickly, and to see them backed up and added to by the research data, was amazingly fulfilling. This is what ‘sold’ me on the value of ethnographic techniques; we had immersed ourselves so fully in the environments under study that we understood them to an extent which I would not have previously thought possible.
Keren Limor-Waisberg, @TheLiteracyTool, @OpenResCam
My happiest data moment was during my PhD. I calculated the performance of some viral elements using different tests. I had a lot of data and it took a while for the scripts to run. It was nerve-racking. I can still remember sitting there listening to the screeching sounds of the computer. And then one by one I got the results, and they all confirmed my hypothesis. It was great. It was a small piece of scientific knowledge, but I was the first person in the world to know about it.
Melissa Scarpate
When I finally got my latent growth model to run!
Kirsten Lamb, @library_sphinx
I was pleased when I discovered that Web of Science does a lot of the analysis I wanted to do but thought I could only do if I had InCites or similar. As much as I like knowing what’s being measured and having an intimate knowledge of the data, sometimes it’s nice to just be able to click a few buttons and get a nice graph!
Dr Sudhakaran Prabakaran, @wk181
We are happy with the publicly available datasets. Our problem starts with the datasets we collect. How to store them, analyse them, and make them available for everyone to use are the questions we are trying to answer all the time.
From the answers, I also picked out some interesting comments:
Written by Dr Sudhakaran Prabakaran @wk181:
I can run a viable research program with no startup money or funds just by scavenging through publicly available datasets.
Past US Presidents have introduced policies requiring that data generated with public funds be made available. Governmental organisations should demystify cloud-based storage and computation processes. People are unduly worried. People willingly give away more personal data on Facebook, Twitter and Instagram than through genome sequences collected by public consortia.