
A Research Note

Abstract

Annual expert-coded democracy indices remain the scholarly gold standard for comparative research. However, the temporal granularity they afford is increasingly mismatched with the velocity of contemporary democratic erosion. This research note proposes a systematic expansion of measurement approaches that can track episodes of regime transformation at weekly or monthly intervals and at the level of provinces and municipalities. Drawing on recent innovations in automated news coding and multilingual machine learning, and examining the RealTime DemTrends project as a proof of concept for the United States, it maps a research agenda for cross-national and subnational extension. The note engages critically with the validity challenges this entails, including the media capture paradox, cross-lingual bias, and aggregation problems. It proposes how non-textual alternative data can address the limits of news-based monitoring in consolidated autocracies. The implications concern not only measurement approaches but the reconceptualisation of democratic erosion as a continuous, spatially differentiated process.

1. The Problem of Temporal Resolution

The study of autocratization has motivated a substantial empirical literature anchored by large-scale data collection efforts, of which the Varieties of Democracy (V-Dem) project is the most comprehensive. V-Dem works with over 4,000 country experts to rate 273 indicators across several conceptual dimensions of democracy, converting their ordinal judgments into continuous latent scores through Bayesian item response theory models (Coppedge et al. 2020). The result is a rich documentation of regime variation across space and time. However, the dataset is released annually, and this cadence is no longer adequate for the phenomena scholars most need to monitor.
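The conversion from ordinal expert ratings to a continuous latent score can be illustrated with a stylised graded-response model. This is a minimal sketch of the general technique, not V-Dem's actual measurement model, which additionally handles coder-specific reliability and thresholds (Pemstein et al. 2025); the cutpoints and ratings below are invented for illustration.

```python
import numpy as np

def ordinal_probs(theta, cutpoints):
    """P(rating = k | theta) under a graded-response model:
    P(rating >= k) = logistic(theta - cutpoint_k), cutpoints increasing."""
    cum = 1.0 / (1.0 + np.exp(-(theta - np.asarray(cutpoints))))
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower  # probabilities of each of the K categories

def latent_estimate(ratings, cutpoints, grid=np.linspace(-4, 4, 801)):
    """Grid-based posterior mean of a latent democracy score given several
    experts' ordinal ratings, shared cutpoints, and a standard normal prior."""
    log_post = -0.5 * grid**2  # log N(0,1) prior, up to a constant
    for r in ratings:
        probs = np.array([ordinal_probs(t, cutpoints)[r] for t in grid])
        log_post += np.log(probs + 1e-300)
    w = np.exp(log_post - log_post.max())
    return float(np.sum(grid * w) / np.sum(w))

# Three experts all assign the top of four categories: latent score is high.
score = latent_estimate([3, 3, 3], cutpoints=[-1.0, 0.0, 1.0])
```

The payoff of the latent-variable treatment is that experts who disagree in their ordinal judgments still contribute to a single continuous estimate with quantified uncertainty.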

Contemporary democratic erosion differs from the coups that prevailed in the twentieth century. Lührmann and Lindberg (2019) establish that the current wave of autocratisation proceeds incrementally, through legalistic and often deliberately opaque mechanisms: regulatory capture of media, executive pressure on judiciaries, and weaponised electoral administration. Bermeo's (2016) typology distinguishes varieties of democratic backsliding that unfold at different speeds. The outcomes of coups become visible relatively swiftly, whereas executive aggrandisement unfolds over years through incremental institutional changes, each too small to be flagged in any single period. Sato and colleagues (2022) show empirically that the order in which democratic attributes erode matters causally for whether autocratisation succeeds, but annual data collapses sequencing into a net score change, discarding the temporal information needed to test such claims.

This epistemological gap has prompted a parallel debate about whether existing instruments identify what they claim to detect. Little and Meng (2024) argue that objective indicators of electoral democracy, covering incumbent turnover, multiparty election competition, and evasion of constitutional term limits, show little evidence of the global democratic regression that expert-coded indices report. Their explanation is a form of time-varying coder bias, by which experts sensitised to democratic fragility judge contemporary events more harshly than equivalent events from earlier decades. Defenders of expert-coded data respond that objective indicators capture only a thin, minimalist conception of democracy and miss the de facto institutional degradation that defines what Varol (2015) calls stealth authoritarianism. Practically speaking, an aspiring autocrat can preserve the outward form of electoral competition while disabling its substance, and only expert judgment embedded in the specific political context will detect the difference. Weidmann (2024) shows, using time-stamped V-Dem coder data, that recent dramatic events in a country are associated with small but detectable shifts in retrospective coder ratings, primarily for repression-related indicators, which is consistent with a narrow form of availability bias. Taken together, both insights point to the need for denser temporal data that can separate short-lived legislative shocks from multi-year, diffuse declines.

2. The Subnational Blind Spot

Parallel to the temporal problem is an equally consequential spatial one. Standard democracy indices assign a single regime score to each sovereign state, an assumption of territorial uniformity that comparative politics has progressively dismantled. The study of subnational authoritarian enclaves, first prominently articulated by Gibson (2005) for Latin America and later extended to other regions, shows that democratisation is rarely a uniform process. State-level and provincial regimes in Argentina and Mexico have diverged considerably from national patterns, operating under distinct political logics sustained by clientelistic networks, political dynasties, and asymmetric resource distribution, among other mechanisms (Gervasoni 2010; Giraudy 2010). Other polities such as Brazil, India, and the United States can be added to this list.

Grumbach (2022) and Grumbach and Bitton (2024) provide systematic evidence for the American case. The State Democracy Index applies Bayesian item response theory to 51 indicators across all 50 US states from 2000 to 2023, finding that Republican-controlled state governments are the dominant predictor of declining subnational democratic performance. The finding has attracted controversy about weighting choices, but the existence of pronounced subnational variation within a consolidated democracy is now empirically established. Pérez Sandoval’s Index of Subnational Electoral Democracy (2023), covering nine countries across the Americas and India for a period of about 40 years, is the most geographically ambitious subnational democracy dataset to date. His subsequent work on multilevel regime decoupling documents that the proportion of cases in which national and subnational democratic trajectories diverge has increased since 1990 (Pérez Sandoval 2025). This finding should give researchers pause about what national-level annual scores actually measure in federal or decentralised polities.

Subnational governments can act as safeguards against autocratisation. When aspiring autocrats pursue executive aggrandisement, they frequently encounter opposition-controlled governorships and mayoralties with the administrative, legal, and fiscal capacity to resist and litigate. The Subnational Elections Database, covering control of the highest subnational executive offices across 84 democracies between 1990 and 2024, provides evidence that subnational electoral alternation functions as a meaningful brake on national autocratisation. V-Dem’s subnational indicators capture country-level expert assessments of variation across regions without providing unit-level scores for individual provinces, leaving subnational mechanisms difficult to quantify in a comparative manner.

3. High-Frequency Monitoring in Practice

The RealTime DemTrends project, developed by Marcia Mundt at Northeastern University, provides the most elaborate example of weekly democracy monitoring currently available. The project tracks 39 factors nested within 13 dimensions of political accountability in the United States, organised around a tripartite distinction between vertical, horizontal, and diagonal accountability channels (Mundt 2025). Each week, thousands of news articles drawn from 15 ideologically balanced news outlets are processed by an ensemble of large language models and classified against a scoring rubric of hypothetical scenarios representing minor, moderate, and significant shifts in each factor on a scale from negative three to positive three. Human researchers review AI outputs at each stage, cross-referencing for source balance and classification consistency.
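The aggregation step of such a pipeline can be sketched as follows. The rubric anchors, function names, and review threshold below are hypothetical illustrations rather than the project's published implementation, and the per-model LLM calls are stubbed out as fixed scores.

```python
from statistics import median

# Hypothetical rubric anchors for one factor ("media freedom"),
# on the note's -3 to +3 scale.
RUBRIC = {
    -3: "outlet shutdowns or broadcast license revocations",
    -1: "officials verbally attack specific journalists",
     0: "no notable change this week",
     1: "court reaffirms press access rights",
     3: "landmark expansion of press protections",
}

def aggregate_scores(model_scores, disagreement_threshold=2):
    """Combine per-model rubric scores for one factor-week.
    Returns the consensus score and a flag routing the case to
    human review when the models disagree too widely."""
    scores = list(model_scores.values())
    consensus = median(scores)
    spread = max(scores) - min(scores)
    return consensus, spread >= disagreement_threshold

# Stubbed outputs from three LLM annotators for one article set.
scores = {"model_a": -2, "model_b": -1, "model_c": -2}
consensus, needs_review = aggregate_scores(scores)
```

The median is deliberately robust to a single aberrant model, while the spread-based flag operationalises the human-in-the-loop review the project describes.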

The report for the week of March 9 to 15, 2026, found 23 of 39 factors showing concern and only 2 showing progress. The analytical value of the approach lies in the causal tracing that the format enables. The report documents how a single macro-decision, namely the initiation of military operations against Iran without congressional authorisation, propagated accountability failures across 13 distinct factors, from executive precedent and media freedom to social cohesion and judicial review. The Pentagon's decision to exclude photographers from briefings constrained diagonal accountability precisely when public demand for information about the school strike casualty figures in Iran was highest (Mundt 2026). The Federal Communications Commission chairman's threat to revoke broadcast licenses over coverage the White House characterised as distorted came in response to presidential criticism, illustrating how executive pressure can instrumentalise formally independent regulatory bodies. Annual data would record, at best, a notable year for press freedom deterioration. Weekly data preserves the specific event sequence through which that deterioration occurred.

The project situates its scoring alongside expert-based measures such as Bright Line Watch. Bright Line Watch (Carey et al. 2019) conducts quarterly expert surveys on 27 democratic principles, and its assessments are drawn from a pool of political science experts that may overlap with the scholarly community targeted by the Little and Meng critique. The more demanding test would compare RealTime DemTrends measurements against V-Dem's expert-coded scores for the same period once those become available, and against observable institutional outcomes such as court rulings, legislation enacted, and documented civil society activity.

4. Cross-National Extension and Its Requirements

The cross-national potential of high-frequency event-based monitoring is real, but the obstacles are serious enough that a research agenda glossing over them will not persuade a sceptical methodological audience.

The first problem is the monolingual bottleneck. The overwhelming majority of automated political event coding systems have been built on English-language text, relying on international wire services and major Western publications (Liang et al. 2018). This produces a coverage bias in which internationally visible events in English-speaking countries are over-counted while subnational incidents, local corruption scandals, municipal electoral irregularities, and low-level civil society intimidation in smaller or non-Western states are missed. Recent advances in multilingual transformer models offer a partial solution. Models such as XLM-RoBERTa and multilingual BERT enable classification of political texts across languages without requiring intermediate machine translation, which can introduce semantic distortions, particularly for politically and legally specialized vocabulary. Domain-adapted models such as ConfliBERT deliver superior performance on conflict-related classification tasks (Hu et al. 2022). More recently, regionally trained models such as SEA-LION reflect the turn toward linguistically and contextually grounded political text analysis (Carnegie Endowment for International Peace 2025).

Cross-lingual bias, however, extends beyond vocabulary. LLMs trained predominantly on Western corpora embed normative assumptions about what democratic behaviour looks like that may not transfer to polities with different institutional histories. Weidmann et al. (2025) show that LLMs approximate V-Dem expert codings but exhibit model-specific biases (consistently over- or underestimating democratic quality), making unreflective reliance on AI-generated scores methodologically problematic. The more defensible approach is what Halterman and Keith (2025) call codebook-LLM measurement, in which models are used as annotators constrained to follow explicit social science codebooks without drawing on the background concepts absorbed during pre-training. Testing several open-weight models on three real-world political science datasets, they find that zero-shot performance is weak to marginal across complex classification tasks, and that models tend to rely on the semantic content of label names, not the codebook definitions themselves. Supervised fine-tuning substantially improves accuracy but at considerable annotation cost. For cross-national democracy monitoring, a classification pipeline deployed without explicit codebook anchoring and iterative validation will likely produce outputs that reflect a model's pre-trained intuitions about concepts like judicial independence or media capture rather than the operationalisations political scientists have developed for them.
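The practical difference between bare-label prompting and codebook anchoring is visible in how the annotation prompt is constructed. The sketch below is an invented illustration of the general idea: the codebook's operational definition and decision rules travel inside every prompt, so the model is not left to improvise from the label name alone. The concept name, definition text, and function are hypothetical.

```python
# Hypothetical one-concept codebook with explicit decision rules.
CODEBOOK = {
    "judicial_independence_violation": (
        "Code 1 only if the text describes concrete executive interference "
        "with courts: removal or intimidation of judges, defiance of court "
        "rulings, or court-packing. General criticism of a ruling, however "
        "harsh, is coded 0."
    ),
}

def build_codebook_prompt(concept, passage):
    """Construct an annotation prompt that anchors the model to an explicit
    operational definition rather than the bare label name."""
    definition = CODEBOOK[concept]
    return (
        "You are an annotator. Apply this codebook definition strictly, "
        "ignoring any prior associations with the concept name.\n"
        f"Concept: {concept}\n"
        f"Definition: {definition}\n"
        f"Passage: {passage}\n"
        "Answer 1 or 0, with a one-sentence justification that cites "
        "the definition."
    )

prompt = build_codebook_prompt(
    "judicial_independence_violation",
    "The president dismissed three supreme court justices by decree.",
)
```

The surrounding validation loop that Halterman and Keith emphasise, checking model outputs against held-out human annotations and revising the codebook text, is the part no prompt template can substitute for.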

There is, however, encouraging evidence on what well-designed pipelines can achieve. Analysing a dataset of 235 party manifestos across 21 languages, Benoit et al. (2026) deploy an ensemble of large language models to measure issue positions, achieving Pearson correlations of .87 to .92 with expert survey benchmarks, a level of agreement at the upper bound set by human expert-to-expert reliability. Two design choices drive this validity. First, prompting models to generate 200- to 300-word English summaries before quantitative scoring anchors the metric in a concrete textual representation. This intermediate step decomposes the extraction task, preventing the attention drift that occurs when querying models directly against 150-page documents. Second, aggregating the individual scores into an ensemble mean systematically dampens stochastic model variance. The authors also test predictive validity against 23 actual coalition policy agreements. Spatial models of government formation predict that a coalition's policy must fall within the ideological range of its member parties. Empirically, the LLM-generated estimates fall within this predicted range significantly more frequently than traditional hand-coded estimates, demonstrating that natural language approaches can outperform human sentence-counting methods.
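The second design choice, ensemble averaging, rests on elementary statistics: if each model run adds independent noise to the underlying position, the mean of k runs has its standard deviation shrunk by a factor of sqrt(k). A small synthetic simulation (the noise level and ensemble size are invented, not taken from the Benoit et al. pipeline) makes the effect concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
true_position = 2.0
n_docs, n_models = 1000, 10

# One model run per document: true position plus stochastic scoring noise.
single_run = true_position + rng.normal(0, 1.0, size=n_docs)

# Ten independent runs per document, averaged into an ensemble score.
ensemble = true_position + rng.normal(0, 1.0, size=(n_docs, n_models)).mean(axis=1)

print(single_run.std())  # close to 1.0
print(ensemble.std())    # close to 1/sqrt(10), about 0.32
```

The simulation only dampens stochastic variance; any bias shared across ensemble members, such as the model-specific over- or underestimation Weidmann et al. (2025) document, survives averaging untouched.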

The Machine Learning for Peace project at DevLab@Penn provides a large-scale proof of concept. Using over 120 million articles in 36 languages across 65 developing countries, it classifies 20 types of civic space events with roughly 82% accuracy and has predictive power for U.S. State Department Level 3–4 travel advisory onset up to six months ahead (Moratz et al. 2025).

The cross-national comparability problem in expert-coded data arises from variation in rater thresholds and remains only partially resolved. V-Dem addresses this through a combination of anchoring vignettes, hypothetical cases rated by all experts to align scale thresholds, and cross-national linking strategies such as bridge and lateral coding. Lateral codings, where experts rate additional countries, are incorporated into the measurement model by treating them as anchoring vignettes to improve cross-national calibration, although residual comparability issues persist (Pemstein et al. 2025). Event-based scoring systems do not typically incorporate explicit anchoring procedures to address cross-national differences in interpretation, which creates risks of non-equivalent measurement across contexts. While this concern is conceptually distinct from extraction accuracy, large-scale event datasets such as GDELT illustrate the additional challenges posed by automated coding: key field accuracy is approximately 55 percent, and data redundancy can reach 20 percent (Hong et al. 2025). These limitations underscore the need for careful preprocessing and caution in interpretation, particularly for fine-grained cross-national inference.
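The logic of anchoring vignettes can be reduced to a toy example. In V-Dem's actual model the alignment happens inside the latent-variable estimation (Pemstein et al. 2025); the sketch below, with invented ratings, shows only the intuition: because every coder rates the same hypothetical cases, systematic strictness can be estimated and removed before country ratings are compared.

```python
def vignette_offset(coder_vignettes, reference_vignettes):
    """Mean difference between a coder's ratings of the shared anchoring
    vignettes and the reference ratings of the same vignettes."""
    diffs = [c - r for c, r in zip(coder_vignettes, reference_vignettes)]
    return sum(diffs) / len(diffs)

def calibrate(country_ratings, offset):
    """Shift a coder's country ratings onto the reference scale."""
    return [r - offset for r in country_ratings]

# A strict coder rates the same vignettes one point lower than the reference,
# so one point is added back to that coder's country ratings.
offset = vignette_offset([1, 2, 2], reference_vignettes=[2, 3, 3])
aligned = calibrate([1, 2], offset)
```

Event-based pipelines currently lack an analogue of this shared stimulus: no two country contexts present the classifier with the same "vignette", which is precisely why cross-national equivalence is harder to establish for them.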

5. The Media Capture Paradox and Alternative Data

There is a deeper problem afflicting all news-based democracy monitoring, most acute in the cases where real-time measurement would be most valuable. Observational indicators of repression often suffer from reporting bias because anticipated state sanctions induce self-censorship (Skaaning 2018). Applying automated text analysis to captured media in consolidating autocracies distorts the data distribution. Because the regime suppresses independent coverage, an automated monitoring system built on news text would record the artificial drop in negative reporting as a neutral or positive signal. Consequently, this measurement practice inverts the empirical relationship between democratic quality and measured output, registering a democratic improvement precisely when the regime is most effectively silencing dissent. Freedom House’s Freedom on the Net 2025 report documents a consistent global decline of internet freedom, with state actors increasingly deploying automated tools and AI to manipulate online information, conduct automated social media monitoring, and build censorship into domestic AI models (Freedom House 2025). In such settings, text-as-data approaches risk measuring the effectiveness of information control, not the state of democratic institutions.

Addressing the media capture paradox requires triangulation with non-textual data that autocratic regimes cannot easily manipulate. Satellite imagery provides the most direct avenue. Nighttime light intensity from instruments such as DMSP-OLS (Henderson, Storeygard, and Weil 2012) and, more recently, VIIRS, has been established as a reliable proxy for subnational economic activity in states where official statistics are unavailable or doctored. Comparing state-reported electricity distribution against satellite-derived nighttime light can disclose whether public resources are equitably distributed across regions or channelled to politically compliant populations, providing an objective measure of horizontal equity. High-resolution imagery can quantify detention facility construction, the massing of security forces, and the physical destruction of settlements, providing verifiable repression indicators. Paci and Sayinzoga (2024) from the National Democratic Institute note that satellite technology is increasingly being deployed to monitor democratic infrastructure and human rights, pointing to examples such as the UNDP mapping potential polling stations in Vanuatu, UN agencies tracking humanitarian movements, and the use of satellite imagery to verify voter registries.

Mobile telecom metadata, credit card transaction volumes, and capital flight indicators offer a parallel set of economic and behavioural proxies. In fragile or low-capacity states, algorithmic analysis of mobile phone metadata is a highly effective proxy for economic vulnerability. For example, machine learning models analyzing mobile phone logs in Afghanistan were able to identify "ultra-poor" households nearly as accurately as costly survey-based measures of consumption and wealth (Aiken et al. 2023). The deployment of central bank digital currencies (CBDCs) in authoritarian states introduces a new frontier for state control. Because CBDCs establish a link between central banks and individuals, they remove financial intermediaries, allowing governments to immediately monitor user data and penalize dissent by constricting or cutting off a user's purchases (Weber 2025).

The case for integrating these alternative data streams is strongest for dimensions of horizontal equity, subnational resource distribution, and state violence, all of which are underdetermined by expert surveys and obscured in restricted information environments. The combination of news-based event coding for open societies with satellite and financial telemetry for closed ones does not resolve the anchoring problem, but it prevents the measurement apparatus from going blind at the frontier where it matters most.

6. Theoretical and Empirical Implications

Weekly or monthly observations across subnational units would change what causal inquiries are tractable in comparative research. Annual cross-national regressions have long suffered from temporal aggregation bias. When the explanatory variable and the outcome are measured once a year, researchers cannot establish whether a given indicator declined before or after the policy event they claim caused it. High-frequency subnational panel data make interrupted time series designs and event study methods genuinely applicable to autocratization research. For instance, scholars could measure the immediate chilling effect of an anti-protest law on civic mobilisation in the weeks following its passage, controlling for pre-event trends within the same unit without having to rely on cross-national variation in annual protest rates. The ability to estimate such micro-level treatment effects represents a methodological gain for a field that has spent two decades debating the endogeneity of its core relationships.
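The interrupted time series design described above can be sketched with segmented regression on synthetic weekly data. The data-generating process (a hypothetical anti-protest law that cuts weekly mobilisation by eight events) is invented for illustration; the point is that the level shift at the passage date is directly estimable once observations are weekly rather than annual.

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(52)
t0 = 26  # week the hypothetical anti-protest law passes
post = (weeks >= t0).astype(float)

# Synthetic weekly protest counts: mild pre-trend, -8 level drop at t0.
y = 50 + 0.1 * weeks - 8 * post + rng.normal(0, 1.5, size=52)

# Segmented regression: intercept, pre-trend, level shift, slope change.
X = np.column_stack([np.ones_like(weeks), weeks, post, post * (weeks - t0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
level_shift = coef[2]  # estimated immediate effect, expected near -8
```

With annual data the same intervention would be a single before-after contrast confounded with everything else that happened that year; 52 observations per unit per year make the within-unit counterfactual trend estimable.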

At a theoretical level, highly granular temporal data facilitates a reconceptualization of democratic erosion as a dynamic process. Prominent theoretical frameworks, ranging from Levitsky and Ziblatt’s (2018) three-phase sequence of capturing referees, sidelining opponents, and rewriting rules, to Bermeo’s (2016) typology of backsliding, are derived primarily from qualitative case studies. Frequently collected multinational data would enable researchers to systematically test whether these trajectories are uniform, highly variable, or path dependent across different regime types. For instance, does judicial capture predictably precede media subjugation, or does this sequence vary based on a state’s institutional legacy? Furthermore, does the pace of executive aggrandisement correlate with the resilience of subnational opposition? Resolving these questions regarding causal mechanisms and temporal sequencing requires a degree of precision that annual, aggregated data inherently lacks.

One must also take seriously the distinction Pierson (2004) draws between fast-moving and slow-moving political processes. Democratic norms, or what Levitsky and Ziblatt (2018) call the guardrails of democracy, change over years or decades, often without generating observable events. There is a genuine risk that high-frequency scores will conflate two distinct phenomena: the acute institutional stress of a political crisis, which causes large score movements and partially recovers, and the slower erosion of democratic foundations, which may be invisible in weekly data because no single week produces a detectable signal. Measurement approaches should explicitly model these dynamics as distinct layers, using high-frequency event data to account for punctuated shocks and periodic expert-coded assessments to track the structural baseline. Collapsing both into a single composite score is methodologically inadequate.

7. Policy Implications

Traditional annual democracy indices are inherently retrospective. Freedom House's Freedom in the World report for a given year is published the following spring, by which point the institutional damage it records has already been sustained. Higher-frequency monitoring changes the temporal relationship between measurement and response. The Machine Learning for Peace project's six-month-ahead forecasting of State Department travel advisories shows that event-based monitoring at sufficient frequency can generate predictive signals with operational value (Moratz et al. 2025). International IDEA's Democracy Tracker, which ingests content from Nexis Newsdesk and the GDELT project, publishes monthly analyses of democracy-related events that are valuable to policymakers, though it stops short of producing quantitative scores (International IDEA 2022).

The policy infrastructure for using such analyses already exists in partial form. Since its inception in 2004, the Millennium Challenge Corporation has invested approximately $17 billion in development compacts. Crucially, a country's eligibility for these funds operates on a strict scorecard system in which the "Ruling Justly" category relies directly on Freedom House's Political Rights and Civil Liberties indices; passing the democratic rights hurdle is a mandatory prerequisite for funding (Millennium Challenge Corporation 2024). The EU's Conditionality Regulation (Regulation 2020/2092) was formally applied for the first time in December 2022, when the European Council voted to suspend billions of euros in cohesion funds to Hungary. The suspension was explicitly triggered by systemic rule-of-law and governance deficiencies that threatened the EU budget. However, preventive action is constrained by insufficient temporal resolution. Early warning systems in food security and climate adaptation offer an instructive precedent: actionable, medium-term forecasts are viable, but only when measurement frequencies are calibrated to the temporal structure of the underlying process (World Meteorological Organization 2023). Democracy monitoring has not yet achieved this capability at global scale, and the gap grows more consequential as the velocity of institutional erosion accelerates.

Conclusion

The disjuncture between the trajectories of democratic erosion and the tools scholars use to measure them reflects a misalignment between empirical reality and methodological design. Despite their sophistication, annual expert surveys inherently obscure phenomena that accelerate over weeks and vary significantly across subnational jurisdictions within nominally unitary polities. The research agenda articulated here does not discard expert-coded indices but complements them with instruments whose temporal and spatial resolution matches the velocity and granularity of the processes we seek to understand.

The RealTime DemTrends project shows that weekly, multi-dimensional democracy monitoring is technically feasible in a data-rich, open-media setting. Extending this cross-nationally and subnationally requires resolving four obstacles in combination: the multilingual bottleneck, cross-national anchoring, the media capture paradox in closed information ecosystems, and the distinction between acute political crisis and slow-moving structural erosion. These are problems that the methodological tools now available, from Codebook-LLM pipelines and domain-adapted multilingual transformers to satellite imagery and telecom telemetry, are better positioned to address than anything the field has previously had access to. Whether they prove adequate in practice is a question requiring sustained and iteratively validated research. That is precisely the kind of agenda the field should be pursuing.

References

Aiken, E. L., G. Bedoya, J. E. Blumenstock, and A. Coville. 2023. "Program targeting with machine learning and mobile phone data: Evidence from an anti-poverty intervention in Afghanistan." Journal of Development Economics 161: 103016.

Benoit, Kenneth, Kenneth De Marchi, Conor Laver, Michael Laver, and Jinshuai Ma. 2026. "Using Large Language Models to Analyze Political Texts through Natural Language Understanding." American Journal of Political Science 00: 1–17.

Bermeo, Nancy. 2016. "On Democratic Backsliding." Journal of Democracy 27 (1): 5-19.

Carey, John M., Gretchen Helmke, Brendan Nyhan, and Susan Stokes. 2019. "Searching for Bright Lines in the Trump Presidency." Perspectives on Politics 17 (3): 699-718.

Carnegie Endowment for International Peace. 2025. Speaking in Code: Contextualizing Large Language Models in Southeast Asia. Washington, DC: Carnegie Endowment for International Peace.

Coppedge, Michael, John Gerring, Carl Henrik Knutsen, Staffan I. Lindberg, Jan Teorell, et al. 2020. "V-Dem Codebook v10." Gothenburg: Varieties of Democracy Project.

Freedom House. 2025. Freedom on the Net 2025: An Uncertain Future for the Global Internet. Washington, DC: Freedom House.

Gervasoni, Carlos. 2010. "A Rentier Theory of Subnational Regimes: Fiscal Federalism, Democracy, and Authoritarianism in the Argentine Provinces." World Politics 62 (2): 302-340.

Gibson, Edward. 2005. "Boundary Control: Subnational Authoritarianism in Democratic Countries." World Politics 58 (1): 101-132.

Giraudy, Agustina. 2010. "The Politics of Subnational Undemocratic Regime Reproduction in Argentina and Mexico." Journal of Politics in Latin America 2 (2): 53-84.

Grumbach, Jacob M. 2022. "Laboratories of Democratic Backsliding." American Political Science Review 117 (3): 967-984.

Grumbach, Jacob M., and Francesca Bitton. 2024. State Democracy Index 2.0 Report. Berkeley: Democracy PolicyLab, University of California.

Halterman, Andrew, and Katherine A. Keith. 2025. "Codebook LLMs: Evaluating LLMs as Measurement Tools for Political Science Concepts." Political Analysis, 1–17.

Henderson, J. Vernon, Adam Storeygard, and David N. Weil. 2012. "Measuring Economic Growth from Outer Space." American Economic Review 102 (2): 994-1028.

Heseltine, Michael, and Bernhard Clemm von Hohenberg. 2024. "Large Language Models as a Substitute for Human Experts in Annotating Political Text." Research and Politics 11 (2).

Hong, Dengxi, Zexin Fu, Xin Zhang, and Yan Pan. 2025. "Research on the Development and Application of the GDELT Event Database." Data 10 (10): 158.

Hu, Yibo, MohammadSaleh Hosseini, Erick Skorupa Parolin, Javier Osorio, Latifur Khan, Patrick Brandt, and Vito D'Orazio. 2022. "ConfliBERT: A Pre-Trained Language Model for Political Conflict and Violence." Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Seattle.

International IDEA. 2022. Democracy Tracker Methodology and User Guide. Stockholm: International IDEA.

Levitsky, Steven, and Daniel Ziblatt. 2018. How Democracies Die. New York: Crown.

Liang, Y., K. Jabr, C. Grant, J. Irvine, and A. Halterman. 2018. "New techniques for coding political events across languages." In 2018 IEEE International Conference on Information Reuse and Integration (IRI), 88-93. IEEE.

Little, Andrew T., and Anne Meng. 2024. "Measuring Democratic Backsliding." PS: Political Science and Politics 57 (2): 149-163.

Lührmann, Anna, and Staffan I. Lindberg. 2019. "A Third Wave of Autocratization Is Here: What Is New About It?" Democratization 26 (7): 1095-1113.

Millennium Challenge Corporation. 2024. "Guide to the MCC Indicators for Fiscal Year 2024." Washington, DC: MCC.

Moratz, Donald A., Jeremy Springman, Erik Wibbels, Serkant Adiguzel, Mateo Villamizar-Chaparro, Zung-Ru Lin, Diego Romero, Mahda Soltani, Hanling Su, and Jitender Swami. 2025. "Tracking Civic Space in Developing Countries with a High-Quality Corpus of Domestic Media and Transformer Models." Working paper, August 21.

Mundt, Marcia. 2025. "How Can We Rapidly Assess What Is Happening in the US Democracy?" RealTime DemTrends (Substack), June 2.

Mundt, Marcia. 2026. "Democracy Trends Weekly Analysis: March 9-15, 2026." RealTime DemTrends (Substack), March 19.

Paci, Tristan, and Maurice Sayinzoga. 2024. "Space, Satellites, and Democracy: Implications of the New Space Age for Democratic Processes and Recommendations for Action." Washington, DC: National Democratic Institute.

Pemstein, Daniel, Kyle L. Marquardt, Eitan Tzelgov, Yi-ting Wang, Juraj Medzihorsky, Joshua Krusell, Farhad Miri, and Johannes von Römer. 2025. "The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data." V-Dem Working Paper 21. Gothenburg.

Pérez Sandoval, Javier. 2023. "Measuring and Assessing Subnational Electoral Democracy: A New Dataset for the Americas and India." Democratization 30 (6): 1031-1050.

Pérez Sandoval, Javier. 2025. "Multilevel Regime Decoupling: The Territorial Dimension of Autocratization and Contemporary Regime Change." Perspectives on Politics (online first).

Pierson, Paul. 2004. Politics in Time: History, Institutions, and Social Analysis. Princeton: Princeton University Press.

Sato, Tetsuro, Martin Lundstedt, Kelly Morrison, Vanessa A. Boese, and Staffan I. Lindberg. 2022. "Institutional Order in Episodes of Autocratization." V-Dem Working Paper 133. Gothenburg.

Skaaning, Svend-Erik. 2018. "Different Types of Data and the Validity of Democracy Measures." Politics and Governance 6 (1): 105-116.

Varol, Ozan. 2015. "Stealth Authoritarianism." Iowa Law Review 100 (4): 1673-1742.

Weber, Valentin. 2025. "Data-Centric Authoritarianism: How China's Development of Frontier Technologies Could Globalize Repression." Washington, DC: National Endowment for Democracy.

Weidmann, Nils B. 2024. "Recent Events and the Coding of Cross-National Indicators." Comparative Political Studies 57 (3): 382-410.

Weidmann, Nils B., Mats Faulborn, and David García. 2025. "Large Language Models Are Democracy Coders with Attitudes." PS: Political Science & Politics 59: 17–23.

World Meteorological Organization. 2023. "Early Warning Systems Reach New Heights, but Critical Gaps Jeopardise Global Progress." Geneva: WMO.
