As a mostly quantitative researcher, I have found my recent engagement with qualitative scholars in the Digital Placemaking and Soft City Sensing DFF research network to be particularly illuminating.
Against the quaint backdrop of Copenhagen, with its expansive cycling lanes and elegant inhabitants, I was reminded of Enrique Peñalosa’s observation that “a developed country is not a place where the poor have cars. It’s where the rich use public transportation.” Or, in this case, cities where the rich cycle.
Navigating different academic communities constitutes a central aspect of my work, with varying degrees of success. At this event, I have had the privilege of engaging with an excellent cohort of ethnographers, science and technology studies (STS) scholars, media studies researchers, and qualitative human geographers.
We visited the anarchist commune Freetown Christiania in person for ethnographic work, and then examined social media data from the same location. Conducting ethnographic fieldwork in a particular neighbourhood, followed by social media analytics, proved stimulating, as I had never undertaken the two in such close temporal proximity. The experience of Christiania was truly remarkable; walking through the place and conversing with residents provided an entirely different perspective on the Instagram content produced there.
Excusatio non petita: These notes are incomplete, contradictory, unpolished, and may feature some worthwhile ideas originating from others, while the inadequate ones are mine alone.

Freetown Christiania is an anarchist commune in Copenhagen established in 1971 (Image source: ChatGPT)
Digital Humanities methods for qualitative inquiry
The hands-on session employing digital humanities and data science methods to inform qualitative inquiry into Christiania proved particularly interesting. For instance, network visualisation was deployed not to study patterns and subsequently formalise results through modelling, but rather to explore intriguing visual clusters or examine the connectivity of individual nodes. This STS approach represents a markedly different application of network visualisation from that typically employed by complexity science researchers. One of the creators of the well-known Gephi network tool (who is also based in Copenhagen) has written about these different modes of network analysis.
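To make this concrete, here is a minimal sketch of the kind of exploratory network view such a session might produce, assuming a toy list of hashtag co-occurrences; the data, and the choice of networkx and matplotlib, are my own illustration rather than the workshop’s actual pipeline.

```python
# Exploratory network view of hashtag co-occurrences (toy data).
# The goal is visual inspection of clusters and well-connected nodes,
# not statistical modelling.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical co-occurrence counts from Instagram captions.
edges = [
    ("christiania", "copenhagen", 42),
    ("christiania", "pusherstreet", 17),
    ("christiania", "streetart", 11),
    ("copenhagen", "travel", 30),
    ("travel", "visitdenmark", 12),
    ("streetart", "graffiti", 9),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Which nodes hold the graph together? Degree is a crude but readable cue.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))

# A force-directed layout to eyeball clusters, in the spirit of Gephi.
pos = nx.spring_layout(G, seed=1)
nx.draw_networkx(G, pos, node_size=300, font_size=8)
plt.axis("off")
plt.show()
```

The point of such a plot is to prompt questions (“why do these hashtags cluster together?”) that then get answered by looking at the actual posts, not by fitting a model.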
Similarly, image analysis, clustering, and natural language processing (NLP) were utilised to explore social media content through a qualitative lens. This approach can prove somewhat liberating, as the objective was not to construct an explanatory model in the hope of detecting statistically significant relationships and patterns. Yet it can also be frustrating, as different researchers can interpret identical content from widely divergent perspectives, resulting in considerable semantic and epistemic vagueness.
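As an illustration of that exploratory use, here is a minimal sketch that clusters a handful of invented captions so that each cluster can be read qualitatively; the texts, the TF-IDF features, and the k-means parameters are assumptions for demonstration only.

```python
# Cluster short texts so each cluster can be read qualitatively,
# rather than treated as the endpoint of the analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

captions = [
    "sunset over the lake in christiania",
    "street art everywhere, amazing murals",
    "best vegan lunch spot in copenhagen",
    "murals and graffiti along pusher street",
    "coffee and cake by the lake",
    "colourful street art walking tour",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(captions)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the captions grouped by cluster, ready for close reading.
for label in range(km.n_clusters):
    print(f"cluster {label}:")
    for text, assigned in zip(captions, km.labels_):
        if assigned == label:
            print("  -", text)
```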
Quali-quant or quant-quali?
The interplay between qualitative and quantitative methods is a recurring theme in the research method literature. It seems evident to me that any quantification must be preceded by qualitative inquiry into what we are measuring, what classification we are employing, and for what purposes (effective scientific classifications are invariably purposeful and provisional). This represents the typical workflow in quantitative research, wherein qualitative work precedes and enables the “proper” quantitative study.
However, one can also reverse this process: quantitative analysis can inform qualitative research. For example, the selection of ethnographic sites or interviewees might be based on network analysis or choropleth maps showing population statistics; case studies might be selected based on broad social media data; qualitative themes might be identified through topic models—assuming one can generate topics that are neither trivial nor uninterpretable. In this sense, quantitative methods might enable the “proper” qualitative research.
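To give a flavour of that last step, here is a minimal sketch, assuming a tiny corpus of invented posts, in which a topic model merely proposes candidate themes for qualitative reading rather than delivering findings in itself.

```python
# Use a topic model to suggest candidate themes for qualitative follow-up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "community meeting about the new bike lanes",
    "tourists photographing murals and street art",
    "neighbourhood picnic by the canal this weekend",
    "guided street art tour starting at the gate",
    "volunteers needed for the community garden",
    "best photo spots for murals in the area",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[::-1][:5]]
    # Candidate themes to be read and interrogated, not reported as results.
    print(f"topic {i}: {', '.join(top)}")
```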
Naturally, these quali-quant oscillations can and should occur iteratively. In our museum work, we frequently revised classifications by observing peculiar patterns in quantitative analyses and visualisations, or by identifying edge cases that emerged as outliers not to be discarded but to be foregrounded as an integral part of the research. In some instances, these edge cases reshaped entire classification systems.
Concepts and pragmatism
In the workshop, the concept of “digital placemaking” proved useful yet exceedingly broad, like anything pertaining to “place”, encompassing how power, agency, and community are constructed in a locale. If nearly every use of digital media constitutes placemaking, then the concept loses its power. Does a tourist’s Instagram story from a café constitute digital placemaking? What about a local resident’s complaint on a Facebook group? Or the use of WhatsApp to organise neighbourhood events?
These conceptual boundaries are invariably difficult to delineate. Anders offered a useful suggestion: adopting a pragmatist approach whereby a concept is deemed valuable through its effects. This draws upon the philosophical tradition of pragmatism, developed by the likes of Charles Sanders Peirce and John Dewey in the late 19th and early 20th centuries in the US. Pragmatists contend that philosophical concepts are best viewed in terms of their practical uses and successes, rather than through abstract definitional exercises.
Rather than seeking definitional purity for “digital placemaking,” we might ask: what work does this digital activity do in constructing, maintaining, or challenging place-based identities and relationships? Does it foster community connection, exclude certain voices, reshape spatial practices, or alter how places are understood and experienced?
Yet pragmatism is not without its limitations. Determining how a concept is “useful” or “effective” in a given context can be rather arbitrary too, ending up, for my taste, in excessively relativist territory (but this is another story).
The situated nature of social research
Social research is invariably situated. Unlike mathematical abstractions, any analysis of social behaviours or processes represents the product of a particular cultural context and outlook. When discussing the workshop’s scope, it became readily apparent what was absent, i.e., the myriad ideas and perspectives that could not be represented.
However, declaring one’s specific and partial viewpoint arguably renders research clearer and more persuasive. As creative writers often advise: write about what you know, not about what you think ought to be important in abstract terms. Embracing a situated perspective, rather than viewing it as a flaw to overcome, represents an interesting approach to positioning oneself as a researcher.
Naturally, there exists a limit to this attitude. I believe researchers should, at least during certain phases of the process, attempt to detach themselves from their object and position, employing mechanisms specifically designed to examine aspects they might not perceive due to their pet theories, socio-cultural background, or ideological commitments.
This may be an unpopular view in some circles, as the social sciences have enthusiastically embraced standpoint theory. While this theory usefully highlights how one’s social position affects knowledge production, some interpretations have moved toward what critics call “identity essentialism”, where the researcher’s demographic characteristics are treated as the primary qualification for epistemic authority, brushing all other considerations aside.
When one embraces epistemic and methodological partisanship without even trying to moderate it, the entire research endeavour feels less persuasive to me, as it conflates activism and scholarship to an unproductive degree. I certainly recognise the impossibility of value-neutral social research in a strong sense, but I also don’t see how abandoning any attempt at objectivity would make one’s research more defensible and impactful outside of little sects of like-minded researchers.
Social media data beyond “bias”
One recurring theme in my work concerns the fact that social media data is never representative of wider society. Every platform possesses peculiar geographical, demographic, affordance-based, and thematic dispositions. An extreme example might be the comparison between Facebook (older users, diverse social classes, pages and groups with posts and comments) and TikTok (younger users, short videos, swiping functionality).
Going against my own usual terminology, I would argue that describing social media data as “biased” does not make a lot of sense. Bias is a concept useful in statistical analysis and survey design: a survey purports to capture a target population’s opinions, and deviation from this constitutes bias (e.g., a consumer survey about filmgoing might include too few low-income participants, women, or rural residents). Bias represents an undesirable gap between the desired population representation and the often disappointing reality of available data.
Social media platforms, by contrast, are not designed to “capture” specific populations: they are service providers appealing to different audiences for different purposes (e.g., LinkedIn for professional self-marketing, Instagram for entertainment, GoodReads for book reviews). Platforms might advertise to attract target groups (good luck bringing younger users to Facebook) and utilise detailed user knowledge for advertising purposes; in turn, advertisers strategically select platforms to reach target audiences. Importantly, platform behaviours vary enormously. Instagram Reels and Facebook posts represent different genres and are more likely to capture specific behaviours (e.g., promotional tourist messages versus lived neighbourhood experiences).
Consequently, social media data is not “biased”. It reflects preferences and orientations resulting from interactions between specific platform logic and users’ varied purposes. There is no ground truth to uncover.
Can we neatly separate research that uses social media data from research about social media? Probably not: no platform possesses “unbiased” data suitable for understanding society in unadulterated form. One might construct representative surveys attempting to access specific platform samples; for example, studying male-female linguistic differences by sampling Facebook users. The underlying challenge remains that platform users are strongly self-selected, representing particular populations rather than general demographics.
What constitutes a better term than “biased” to describe this intrinsic aspect of social media data? Perhaps “situatedness”? This term, borrowed from feminist epistemology and STS, acknowledges that all content is produced from particular positions without treating this as a flaw to be corrected.
The unfathomable value of context
In conclusion, what emerged from the interaction between fieldwork and social media analytics of the same locale was the necessity of knowing a “platial” context to articulate anything meaningful about it. This might sound like a platitude, but it is not treated as one in some quantitative circles.
I recall, years ago, chatting with a data scientist about his analysis of millions of tweets from a Global South country using natural language processing. When I asked, “Have you ever been there? Talked to any of the users?”, he looked genuinely puzzled and replied, “No,” as if to say, “Why on Earth would I do that?”