Agency, pertaining to planning and executing actions, is a core feature of the political landscape. Our study examines the temporal dynamics of agentic language in political online discourse during the 2020 U.S. Congressional Elections, spanning 180 days before and after Election Day and ending before the Capitol Hill riots. We coded 495,252 messages posted on Twitter by Democratic and Republican candidates for agentic language, which was more prevalent in the tweets of politicians who won their elections. Temporal analyses revealed increased agency as critical political events approached, whether a planned democratic event (Election Day) or a sudden disruptive protest (the Capitol riots). The study enhances our understanding of the role of agency expression in political social media communication. Politicians may strive to evoke agency among voters to encourage political engagement, and our results may caution voters about this subtle manipulative strategy, of which politicians themselves may not be fully aware.
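As a rough illustration of this kind of temporal analysis (not the study's own code), the sketch below aggregates tweet-level agency scores by day relative to Election Day and by party, and compares winners with losers. The input file and its columns ("date", "party", "agency", "won") are hypothetical.

```python
# A minimal sketch, assuming tweets have already been scored for agency and
# saved to a CSV file; file name and column names are hypothetical.
import pandas as pd

tweets = pd.read_csv("candidate_tweets_scored.csv", parse_dates=["date"])
election_day = pd.Timestamp("2020-11-03")
tweets["days_to_election"] = (tweets["date"] - election_day).dt.days

# Mean agency per day relative to Election Day, split by party.
daily = (tweets.groupby(["days_to_election", "party"])["agency"]
               .mean()
               .unstack("party"))

# Simple comparison of mean agency between election winners and losers.
by_outcome = tweets.groupby("won")["agency"].mean()

print(daily.tail())
print(by_outcome)
```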
Pertaining to goal orientation and achievement, agency is a fundamental aspect of human cognition and behavior. Accordingly, detecting and quantifying the linguistic encoding of agency are critical for the analysis of human actions, interactions, and social dynamics. Available agency-quantifying computational tools rely on word-counting methods, which are typically insensitive to the semantic context in which words are used and are consequently prone to miscoding, for example in cases of polysemy. Additionally, some currently available tools do not take into account differences in the intensity and directionality of agency. To overcome these shortcomings, we present BERTAgent, a novel tool for quantifying semantic agency in text. BERTAgent is a computational language model built on the transformer architecture, a popular deep learning approach to natural language processing. It was fine-tuned on textual data evaluated by human coders with respect to the level of conveyed agency. In four validation studies, BERTAgent exhibits improved convergent and discriminant validity compared to previous solutions. In addition, the detailed description of BERTAgent's development procedure serves as a tutorial for the advancement of similar tools, providing a blueprint for leveraging existing lexicographical data sets in conjunction with deep learning techniques to detect and quantify other psychological constructs in textual data.
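To make the general approach concrete, here is a minimal sketch (not the published BERTAgent implementation) of fine-tuning a transformer as a regressor on human agency ratings using the Hugging Face `transformers` library; the base model choice, the toy sentences, and their ratings are assumptions for illustration only.

```python
# Fine-tune a transformer with a single-output regression head on
# human-rated agency scores; all data shown here are toy placeholders.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class AgencyRatings(Dataset):
    """Pairs of (sentence, mean human agency rating)."""
    def __init__(self, texts, ratings, tokenizer, max_len=64):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(ratings, dtype=torch.float)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 with problem_type="regression" makes the classification head
# a single-output regressor trained with a mean-squared-error loss.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1, problem_type="regression")

train_ds = AgencyRatings(
    ["We will fight and win.", "Nothing can be done about it."],  # toy sentences
    [0.9, -0.7],                                                  # toy ratings
    tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="agency_model", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds)
trainer.train()

# Scoring new text: the single logit is read as the agency score.
inputs = tokenizer("Together we can change this.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.item()
print(score)
```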
A modern interdisciplinary analysis of social networks entails detecting and investigating relevant socio-psychological linguistic markers that carry insight into the nature and characteristics of social discourse. Associating markers with specific words is a further important step, allowing for an even richer interpretation. Taking the social discourse on Twitter as a working example, we propose a scalable method called PageRank-like marker projection (PLMP), which follows a rationale inspired by PageRank to fully exploit the interdependencies in a semantic network and thereby meaningfully project markers from the level of the social discourse (tweets) onto its semantic elements (words). The effectiveness of PLMP is shown with an application example on calls to online collective action.
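The following sketch conveys the general idea of such a projection, not the authors' actual PLMP algorithm: tweet-level marker scores seed a personalization vector for a PageRank run over a word co-occurrence network, so that the tweet-level signal is redistributed to individual words. The toy tweets, their marker scores, and the parameter values are hypothetical.

```python
# PageRank-style projection of a tweet-level marker onto words,
# using a word co-occurrence network built from toy data.
from collections import defaultdict
from itertools import combinations
import networkx as nx

tweets = [
    (["join", "us", "act", "now"], 0.9),    # (tokens, marker score of the tweet)
    (["nothing", "we", "can", "do"], 0.1),
    (["act", "together", "now"], 0.8),
]

# 1) Co-occurrence network: edge weight = number of tweets sharing both words.
G = nx.Graph()
for tokens, _ in tweets:
    for w1, w2 in combinations(set(tokens), 2):
        weight = G.get_edge_data(w1, w2, {}).get("weight", 0)
        G.add_edge(w1, w2, weight=weight + 1)

# 2) Seed each word with the mean marker score of the tweets containing it.
seed = defaultdict(list)
for tokens, score in tweets:
    for w in set(tokens):
        seed[w].append(score)
personalization = {w: sum(s) / len(s) for w, s in seed.items()}

# 3) PageRank with the marker-based personalization vector: the stationary
#    distribution pushes the tweet-level signal through the network, so
#    well-connected words in high-marker neighbourhoods rank higher.
word_marker = nx.pagerank(G, alpha=0.85,
                          personalization=personalization, weight="weight")

for w, v in sorted(word_marker.items(), key=lambda kv: -kv[1]):
    print(f"{w}\t{v:.3f}")
```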