Artificial Intelligence &
the Future of Work

OrgLab studies the evolving role of Artificial Intelligence in reshaping workforce and organizational processes. Through interdisciplinary studies, ranging from qualitative fieldwork to quantitative modeling, we investigate how AI use transforms job profiles, skill requirements, and organizational structures. By critically examining both the benefits and the unintended consequences of AI adoption, our research provides a nuanced foundation for evidence-based strategies in the rapidly changing landscape of work.

Our Research on Artificial Intelligence & the Future of Work

Skill Development and Retention When Working with AI Tools

Abstract. We investigate the effects of the use of AI tools on skill development, retention and loss, examining whether AI use leads to reduced or increased opportunity and need for skill development. Programming is used as an example domain because of the increasing impact of AI tools there. We identify new skills needed to effectively utilize AI tools, namely prompting and output evaluation. We then develop a model of the interplay of task complexity, AI capabilities and individual expertise, predicting the conditions under which skill development occurs. Finally, we create a set of hypotheses about the impacts of AI tool use for novice and expert programmers and propose research methods to test them.

Reference. Crowston and Bolici (2025). Deskilling and upskilling with generative AI systems. In Proceedings of the iConference 2025. Bloomington, Indiana, USA.

ChatGPT-generated Tensions in Healthcare: A Literature Review Map for Responsible Use

Abstract. AI-powered systems are expected to profoundly reshape the healthcare sector. However, the technology has yet to consolidate, and guidelines promoting its responsible use remain scattered, leaving considerable confusion in evaluating the appropriateness of its applications. This literature review maps the key opportunities and risks associated with the introduction of the AI-powered tool ChatGPT in healthcare research, education and clinical practice, in order to determine which uses are considered appropriate, which are controversial, and which should be avoided. Our findings suggest that ChatGPT is a valuable tool for healthcare research. While its use as a co-author is considered unethical, it remains a legitimate tool for editing and proofreading. In healthcare education, LLMs like ChatGPT are expected to become increasingly influential, which calls for a reassessment of student roles and a redesign of educational strategies to align with current technological affordances. ChatGPT is not a medical tool; its application to clinical practice should be avoided. Developing appropriate regulatory frameworks is necessary to exploit the transformative power of AI while preserving ethical and clinical standards.

Reference. Bolici, Varone and Diana (2024). ChatGPT-generated Tensions in Healthcare: A Literature Review Map for Responsible Use. In Proceedings of OBHC Conference 2025. Oslo, Norway.

Unpopular Policies, Ineffective Bans: Lesson Learned From ChatGPT Prohibition in Italy

Abstract. The rapid diffusion of disruptive technologies is having a revolutionary and tangible impact on individuals, organizations and society. However, this rapid pace of development is not matched by up-to-date regulations, which makes the relationship between institutional policies and technological advancements complex and controversial. Taking generative AI as a reference, this work studies how individuals respond to public interventions banning disruptive technologies, exploring the arguments and sentiment they express towards them. By analysing approximately 15,000 Twitter contributions on the suspension of ChatGPT in Italy, our work provides evidence that banning disruptive technologies is likely to be ineffective and unpopular. This was highlighted by the strong prevalence of individuals expressing a negative perception of the ban, by the presence of users actively and collaboratively searching for ways to bypass it, and by a perceived institutional backwardness in terms of technology development.

Reference. Bolici, Varone and Diana (2024). Unpopular Policies, Ineffective Bans: Lesson Learned From ChatGPT Prohibition in Italy. In Proceedings of ECIS 2024. Paphos, Cyprus.

Impacts of Machine Learning on Work

Abstract. The increased pervasiveness of technological advancements in automation makes it urgent to address the question of how work is changing in response. Focusing on applications of machine learning (ML) that automate information tasks, we present a simple framework for identifying the impacts of an automated system on a task. From an analysis of popular press articles about ML, we develop three patterns for the use of ML (decision support, blended decision making and complete automation), with implications for the kinds of tasks and systems involved. We further consider how automation of one task might have implications for other interdependent tasks. Our main conclusion is that designers have a range of options for systems and that automation of tasks is not the same as automation of work.

Reference. Crowston and Bolici (2019). Impacts of Machine Learning on Work. In Proceedings of the 52nd Hawaii International Conference on System Sciences, 2019.