thread for references and discussion on open science, open hardware, legitimacy, documentation, projects’ impacts
Powell, A. (2016). Hacking in the public interest: Authority, legitimacy, means, and ends. New Media & Society, 18(4), 600–616. https://doi.org/10.1177/1461444816629470
Powell, A. B. (2012). Democratizing production through open source knowledge: From open software to open hardware. Media, Culture & Society, 34(6), 691–708. https://doi.org/10.1177/0163443712449497
Powell, A. B. (2015). Open culture and innovation: Integrating knowledge across boundaries. Media, Culture & Society, 37(3), 376–393. https://doi.org/10.1177/0163443714567169
Hi, thanks for sharing these, and sorry for the late reply! I am reading them now.
I am interested specifically in the idea that citizen science seems to focus a lot on a process led by a scientist, not necessarily guided by the communities involved. That is, participants solve a problem and test a hypothesis already decided by a scientist.
My proposal right now is about methodologies that communicate problems and hypotheses based on citizens’ perceptions, in two ways: to validate citizens’ initial conceptions, and to have citizens internalize scientific concepts that are harder to grasp but are needed for these processes.
For example, community maps and formulating falsifiable hypotheses before turning to measurement tools. These can also be useful for problems that are difficult to measure, as in the social sciences (I’d never heard of citizen social science!).
Does this make sense?
yes! this makes perfect sense.
I do think there is a lot more citizen science out there, both being performed and being studied; it just means looking in the right places (in academic publishing: Science & Technology Studies, environmental social sciences, anthropology, etc.). it is not only scientists handing laypeople work to go and gather data.
one big argument put forth in these articles is that these citizen science initiatives ARE worth studying, because no institutional actor is bothering to look at the problems these groups want to look at and think are important. (Noortje Marres says this in a few of her articles and chapters, for example.) as for your question about “methodologies that communicate problems and hypotheses based on citizens’ perceptions … to validate citizens’ initial conceptions” - that is a good question. I’m guessing this kind of work is scattered and fragmented, and groups don’t know of each other’s work or share knowledge. if I come across something, I’ll let you know.
as for having citizens “interiorize scientific concepts that are harder to grasp”, I’ve heard of projects that have a ‘science teaching’ component. you’re right, they are usually about the natural sciences; I’m not sure about the social science skills, approaches, and theory/knowledge that would be helpful.
more background reading: I like this one by Lave, the section on ‘History and Forms of Extramural Knowledge Production’
and Nowotny, H. (2003). Democratising expertise and socially robust knowledge. Science and Public Policy, 30(3), 151–156. https://doi.org/10.3152/147154303781780461
https://academic.oup.com/spp/article/30/3/151/1628314
You know, that is exactly what I just found on Powell (2016), and found really fascinating:
The crowdsourcing dynamics that are the subject of Mansell’s inquiry often create a power imbalance whereby “lay” contributors to crowdsourced scientific projects are positioned as amateurs and as data sources, rather than as collaborators.
I do think that these processes don’t reduce people to mere data gatherers, but they do keep people from participating in problem definition, for a supposed lack of expertise. I wonder about this because the same pattern is common in some international development approaches as well. Participatory work is hard!
I found the 2016 Powell article very useful: how means and ends can easily be confused and how it is more strategic to articulate them explicitly - and how ‘legitimacy’ plays out in open hardware, including how ‘accurate’ measuring devices and data need to be - and so on.
‘participation’ and participatory stuff IS hard, and, on the one hand, we shouldn’t over-romanticize it, invite a bunch of people into a room and ask them to do random stuff - and expect some outputs. on the other hand, getting people involved in stuff, in real, hands-on activities, dealing with real materials, is valuable without question. people learn and become involved and engaged, in ways that are not so easy to detect and describe in academic articles.
I’m just having a look at this 2011 article, relatively well cited, that puts citizen science projects into a typology:
Wiggins, A., & Crowston, K. (2011). From Conservation to Crowdsourcing: A Typology of Citizen Science. HICSS 2011. https://doi.org/10.1109/HICSS.2011.207
https://ieeexplore.ieee.org/abstract/document/5718708
on participation more generally, I am revisiting this article:
Kelty, C. M. (2017). Too Much Democracy in All the Wrong Places: Toward a Grammar of Participation. Current Anthropology, 58(S15), S77–S90. https://doi.org/10.1086/688705
https://www.journals.uchicago.edu/doi/full/10.1086/688705
a good essay
Bonney, R., Phillips, T. B., Ballard, H. L., & Enck, J. W. (2016). Can citizen science enhance public understanding of science? Public Understanding of Science, 25(1), 2–16. https://doi.org/10.1177/0963662515607406
Me too! I really liked it. As I see it, there will always be questions regarding the authenticity of citizen science compared to “regular” science. I think the whole issue should go along the lines of using these tools as a means for developing scientific thinking: [please don’t kill me] citizen science as the ‘design thinking’ of science. Teaching people about establishing a hypothesis, measurement, falsifiability, and reproducibility through the use of simple tools. It doesn’t have to be profound, but done well, by individuals who understand what they know and what they don’t know, it can be very valuable.