As AI systems become embedded in organizational processes, the question is how to govern them responsibly. OrgLab's work in this area examines how organizations navigate the tension between innovation speed and regulatory compliance, how technology bans and policy interventions reshape innovation ecosystems, and how public institutions can act as active co-creators of responsible AI governance rather than passive regulators. Drawing on fieldwork across public administration, healthcare surveillance, and cultural and creative industries, we develop frameworks that connect EU-level policy (including the AI Act and Horizon Europe priorities) with the operational reality of organizations implementing AI.
Abstract. When public authorities temporarily restrict access to a technology on privacy or security grounds, they do two things at once: they remove access, and they broadcast a regulatory signal about the technology's risks and legitimacy. This study examines how this signal alters individuals' engagement with the technology once access is restored. We exploit Italy's 2023 ban on ChatGPT as a natural experiment. The ban, the first instance of a Western democracy restricting access to a general-purpose AI technology, is examined through a difference-in-differences design comparing Italy's post-ban usage trajectory with European comparators. The results show that Italian usage returned to its counterfactual trajectory once access was restored. Neither the lasting usage deficit predicted by risk amplification and technological stigma theories nor the usage surge predicted by psychological reactance theory materialized. Instead, the finding is consistent with signal attenuation. The study makes three contributions. First, it provides direct empirical evidence that a sovereign restriction on a general-purpose AI technology can leave no detectable trace on post-ban usage. Second, it demonstrates that the same instrument that successfully compelled provider compliance had no detectable effect on demand-side behavior, indicating that regulatory reach over providers and regulatory influence over users may operate through distinct channels that require distinct instruments. Third, it derives boundary conditions that may help specify when regulatory signals reach individual behavior, offering an agenda for future investigation.
Reference. Varone, A. and Bolici, F. (2026) “Banned, Restored, Resumed: How Does Banning AI Technologies Alter Their Post-Ban Usage? Evidence from Italy’s 2023 ChatGPT Ban,” Working Paper.
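The difference-in-differences logic behind the design can be sketched in a few lines. Everything below is illustrative, not the paper's data: `usage` is a hypothetical usage index for Italy (treated) and a pooled European comparator (control), before and after the ban window.

```python
# Hypothetical usage-index values; the real study uses actual usage
# trajectories across multiple European comparators.
usage = {
    ("italy", "pre"): 100.0, ("italy", "post"): 104.0,
    ("control", "pre"): 100.0, ("control", "post"): 105.0,
}

def did_estimate(u):
    """Difference-in-differences: (treated post - pre) minus (control post - pre)."""
    treated_change = u[("italy", "post")] - u[("italy", "pre")]
    control_change = u[("control", "post")] - u[("control", "pre")]
    return treated_change - control_change

print(did_estimate(usage))  # -1.0: a gap near zero, i.e. a return to trend
```

An estimate near zero, as in this toy example, is what "returned to its counterfactual trajectory" means: Italy's post-ban change mirrors the comparators' change.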
Abstract. This paper proposes a new framework for the governance of emerging AI ecosystems, reimagining public institutions as active co-designers, coordinators, and promoters of technology transfer rather than mere regulators and supervisors. By adopting this ecosystemic perspective, the work supports a cohesive and responsible path for AI development and diffusion.
Reference. Varone, A. et al. (2025) “What role(s) for Public Institutions in Emerging AI Ecosystems? Co-Designers, Coordinators and Promoters of Technology Transfer,” Prospettive in Organizzazione [Preprint].
Abstract. AI-powered systems are expected to profoundly reshape the healthcare sector. However, the technology has yet to consolidate, and guidelines promoting its responsible use are scattered. As a result, considerable uncertainty surrounds the appropriateness of its applications. This literature review aims to map the key opportunities and risks associated with the introduction of the AI-powered tool ChatGPT in healthcare research, education, and clinical practice, in order to determine which uses are considered appropriate, which are controversial, and which should be avoided. Our findings suggest that ChatGPT is a valuable tool for healthcare research. While its use as a co-author is considered unethical, it remains a legitimate tool for editing and proofreading. In healthcare education, LLMs like ChatGPT are expected to become increasingly influential in the future. This calls for a reassessment of student roles and a redesign of educational strategies to align with current technological affordances. ChatGPT is not a medical tool; its application to clinical practice should be avoided. Developing appropriate regulatory frameworks is necessary to exploit the transformative power of AI while preserving ethical and clinical standards.
Reference. Bolici, F., Varone, A. and Diana, G. (2024) “ChatGPT-generated Tensions in Healthcare: A Literature Review Map for Responsible Use,” in Proceedings of the OBHC Conference 2024. Oslo, Norway.
Abstract. The relationship between innovation diffusion and privacy is often controversial. On March 31st, 2023, the Italian Data Protection Authority (GPDP) ordered the temporary suspension of ChatGPT in Italy on account of illicit data collection and processing practices. This study uses the Twitter debate as a proxy to examine how the Italian landscape responded to the suspension of ChatGPT and how it perceived the innovation-privacy trade-off. Our findings reveal that the GPDP intervention sparked an active and socially engaged reaction. Both popular and less popular users played significant roles in the flow of information: popular users were more central in terms of connections, while less popular users demonstrated stronger brokering capabilities. The suspension of ChatGPT affected both the sentiment and the content of the debate, with the majority of users expressing negative opinions in their contributions. However, contributions mentioning VPNs and alternatives to ChatGPT generally showed more positive sentiment than those emphasizing privacy, suggesting that users’ negative sentiment stemmed mainly from the inaccessibility of ChatGPT rather than from privacy concerns. Consequently, the Italian landscape viewed the GPDP intervention as a restriction rather than a protective measure, prioritizing innovation diffusion over privacy considerations.
Reference. Bolici, F., Varone, A. and Diana, G. (2024) “To Ban, or Not to Ban, this Is the D(AI)lemma: An Analysis of Ecosystem Landscapes,” in A.M. Braccini, F. Ricciardi, and F. Virili (eds.) Digital (Eco) Systems and Societal Challenges: New Scenarios for Organizing. Cham: Springer Nature Switzerland, pp. 335–353. Available at: https://doi.org/10.1007/978-3-031-75586-6_18.
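The contrast between connection-based centrality and brokering capability can be illustrated on a toy graph. Everything below is hypothetical: the edge list, the node names, and the plain-Python betweenness computation merely stand in for the paper's actual network and measures. A "broker" node bridging two clusters can have the lowest degree yet the highest betweenness.

```python
from itertools import permutations

# Hypothetical interaction network: two tight clusters joined by a broker.
edges = [("a", "b"), ("b", "c"), ("a", "c"),      # cluster 1
         ("x", "y"), ("y", "z"), ("x", "z"),      # cluster 2
         ("broker", "a"), ("broker", "x")]        # bridge
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def shortest_paths(s, t):
    """All shortest paths from s to t via level-by-level BFS."""
    paths, frontier, seen = [], [[s]], {s}
    while frontier and not paths:
        nxt, nxt_seen = [], set()
        for p in frontier:
            for w in adj[p[-1]]:
                if w == t:
                    paths.append(p + [w])
                elif w not in seen:
                    nxt.append(p + [w])
                    nxt_seen.add(w)
        frontier, seen = nxt, seen | nxt_seen
    return paths

nodes = sorted(adj)
degree = {v: len(adj[v]) for v in nodes}                 # "popularity"
betweenness = {v: 0.0 for v in nodes}                    # "brokerage"
for s, t in permutations(nodes, 2):
    sps = shortest_paths(s, t)
    for v in nodes:
        if v not in (s, t):
            betweenness[v] += sum(v in p for p in sps) / len(sps)
```

Here the broker has degree 2 (the minimum) but the highest betweenness, since every shortest path between the clusters must pass through it, mirroring the finding that less connected users can still dominate information brokerage.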
Abstract. The rapid diffusion of disruptive technologies is having a revolutionary and tangible impact on individuals, organizations, and society. However, this pace of development is not matched by up-to-date regulations, which makes the relationship between institutional policies and technological advancement complex and controversial. Taking generative AI as a reference, this work studies how individuals respond to public interventions banning disruptive technologies, exploring the arguments and sentiment they express. By analysing approximately 15,000 X contributions on the suspension of ChatGPT in Italy, our work provides evidence that banning disruptive technologies is likely to be ineffective and unpopular. This is highlighted by the strong prevalence of users expressing a negative perception of the ban, by users actively and collaboratively searching for ways to bypass it, and by a perceived institutional backwardness in technology development.
Reference. Bolici, F., Varone, A. and Diana, G. (2024) “Unpopular Policies, Ineffective Bans: Lessons Learned from ChatGPT Prohibition in Italy,” ECIS 2024 Proceedings [Preprint]. Available at: https://aisel.aisnet.org/ecis2024/track04_impactai/track04_impactai/11.
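A minimal sketch of lexicon-based sentiment scoring, of the kind such debate analyses often build on. The word lists and post texts below are invented for illustration; they do not reproduce the paper's actual classification pipeline.

```python
# Hypothetical toy lexicons; a real analysis would use a validated
# sentiment model or lexicon, not hand-picked keywords.
NEGATIVE = {"ban", "blocked", "absurd", "backward"}
POSITIVE = {"solution", "vpn", "alternative", "works"}

def score(text):
    """Positive-word hits minus negative-word hits (set-based, illustrative)."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

posts = [
    "this ban is absurd and backward",
    "found a vpn solution that works",
    "chatgpt blocked again",
]
labels = ["neg" if score(p) < 0 else "pos" if score(p) > 0 else "neu"
          for p in posts]
```

Note how, even in this toy form, posts about VPNs and workarounds score positive while posts about the ban itself score negative, the same contrast the study observed at scale.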