OrgLab studies the evolving role of Artificial Intelligence in reshaping work and organizational processes. Through interdisciplinary studies, from qualitative fieldwork to quantitative modeling, we investigate how AI use transforms job profiles, skill requirements, and organizational structures. By critically examining both the benefits and unintended consequences, our research provides a nuanced foundation for evidence-based strategies in the rapidly changing landscape of work.
Abstract. This paper examines how AI systems transform the conditions under which skills develop by reshaping the sensemaking processes workers use to interpret, verify, and learn from their actions. Adopting a conceptual, theory-building approach within a socio-technical systems (STS) framework, we theorize how AI risks shifting work from “learning by doing” to “doing without learning” unless verification capabilities are explicitly cultivated. We develop a generalizable model of human-AI interaction that identifies three distinct skill trajectories – novice compression, intermediate drift, and expert expansion – driven by how AI alters users’ sensemaking commitments. The central mechanism determining these paths is not AI proficiency, but verification intent. While novices tend to over-rely on AI and shrink their learning band, and intermediates oscillate between acceleration and stagnation, experts use verification episodes as catalysts for deeper representational refinement. Consequently, we position verification not merely as a technical competence, but as the socio-cognitive anchor of sustainable learning. The analysis suggests that organizations must intentionally design AI-augmented workflows that preserve cognitive friction and prioritize verification skills, such as diagnosis and counterfactual checking, to ensure sustainable expertise development.
Reference: Bolici, F., Varone, A. and Crowston, K. (2026) “From Learning-by-Doing to Doing-Without-Learning: A Sensemaking Model of Skill Development in AI-Augmented Work,” Atti del XLI Convegno Nazionale AIDEA. XLI Convegno Nazionale AIDEA, Milano.
Abstract. What does it mean to be skilled in a world where machines can now write computer code? We explore how generative AI is not only accelerating productivity, but reshaping the very meaning of programming expertise. Adopting a relational perspective, we focus on three interdependent skills that define effective human–AI collaboration: task framing, prompt design, and output interpretation. Drawing on research in programming skills development and human–AI interaction, we trace the emergence of hybrid forms of competence that blend technical reasoning with contextual judgment, skills like strategic prompting, critical debugging, and situated problem framing. These signal a broader shift in programming: from producing code to coordinating AI-assisted problem solving, requiring new forms of cognitive effort and evaluative thinking. As AI becomes an active collaborator, the focus is moving away from writing code line-by-line toward orchestrating adaptive systems. This transformation has deep implications for how technical skills are learned, applied, and socially valued in AI-mediated environments.
Reference: Bolici, F. et al. (2026) Rethinking Programming Skills in the Age of Generative AI. Available at: https://doi.org/10.24251/HICSS.2026.863.
Abstract. A growing concern for the field of computer-human interaction is human interaction with artificial intelligence (AI). A concern about the use of AI tools is that automating tasks reduces the opportunity to learn how to do them. We explore antecedents to this outcome in the context of AI tools for programming in an introductory class. We hypothesize that if students are intrinsically motivated to learn to program, they may avoid using tools in order to engage with the material, while students with extrinsic motivations may use the tools to get work done. Counter to our expectations, analysis of a survey of learning motivations and log data of AI tool use suggests that students with higher extrinsic motivation actually use the AI tool less, while those with higher intrinsic motivation use it more, and that there is no correlation between the level of AI use and grades. These findings suggest that to understand the impact on skills, it is necessary to examine how AI is used in detail, not just the overall level of use.
Reference: Varone, A. et al. (2026) “Motivations for Using AI Tools in an Introductory Programming Class,” Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems. CHI Conference on Human Factors in Computing Systems.
Abstract. Deskilling is a long-standing prediction of the use of information technology, raised anew by the increased capabilities of artificial intelligence (AI) systems. A review of studies of AI applications suggests that deskilling (or leveling of ability) is a common outcome, but systems can also require new skills, i.e., upskilling. To identify which settings are more likely to yield deskilling vs. upskilling, we propose a model of a human interacting with an AI system for a task. The model highlights the possibility for a worker to develop and exhibit (or not) skills in prompting for, and evaluation and editing of, system output, thus yielding upskilling or deskilling. We illustrate these model-predicted effects on work with examples of current studies of AI-based systems. We discuss organizational implications of systems that deskill or upskill workers and suggest future research directions.
Reference: Crowston, K. and Bolici, F. (2025) “Deskilling and upskilling with AI systems,” Information Research: An International Electronic Journal, 30(iConf), pp. 1009–1023. Available at: https://doi.org/10.47989/ir30iConf47143.
Abstract. Integrating Artificial Intelligence (AI) into organizations requires more than simply installing new technologies; it demands systematic alignment between AI capabilities and existing organizational structures, work practices, and human capabilities. This chapter introduces a decision-support framework to help leaders evaluate whether and how to implement AI, drawing on each organization’s unique structure, tasks, and goals. By linking organizational design theory with practical adoption strategies, this chapter shows how different task types—routine, engineering/craft, and non-routine—pair with three potential impacts of AI on organizational dynamics: replace, reinforce, or reveal. Rather than prescribing a universal solution, it emphasizes that effective AI implementation depends on adopting the right approach for each context. In this way, it guides decision makers in choosing the most suitable AI solution and in anticipating corresponding changes in roles, processes, and culture. This clear, structured method helps organizations avoid misalignment, reduce costly experimentation, and unlock AI’s potential as a genuine driver of efficiency, innovation, and strategic value.
Reference: Varone, A., Bolici, F. and Crowston, K. (2025) “From Execution to Orchestration: Rethinking GenAI Implementation through Information Processing Theory,” ITAIS 2025 Proceedings. XXII Conference of the Italian Chapter of AIS.
Abstract. Artificial Intelligence (AI) is transforming the conditions under which expertise develops. This paper theorizes how AI reorganizes skill development, maintenance, and loss through three mechanisms. The Performance-Leveling Effect describes how AI compresses observable performance differences across different expertise levels by providing an external layer of capability that especially benefits novices. The Augmentation–Deskilling Paradox captures how short-term gains can displace the practice cycles through which declarative knowledge becomes procedural expertise. The AI Capability Loss denotes the unrealized potential that arises when users lack meta-skills (e.g. problem decomposition, prompting, evaluative judgment, contextual adaptation) needed to orchestrate AI effectively. Together, these mechanisms explain why AI can raise short-term productivity while gradually eroding the capabilities required for sustained expertise and point to implications for organizational learning and the design of AI-enabled work.
Reference: Varone, A., Crowston, K. and Bolici, F. (2025) “Generative AI and the evolution of skills: A conceptual model for skill development and retention,” EGOS 2025. 41st EGOS Colloquium.
Abstract. The intersection of Artificial Intelligence (AI) and Public Administration (PA) is a field of exponential growth, yet it is widely perceived as fragmented and lacking a coherent intellectual core. This study addresses this tension by providing a bibliometric-based thematic analysis of the research landscape. Analyzing a corpus of 820 documents, we employ performance analysis and keyword co-occurrence to map the field's evolution and thematic structure. Our findings reveal a "post-2018 explosion" of research, creating a two-speed intellectual economy where PA functions as a net importer of ideas from adjacent disciplines. We identify five distinct thematic clusters—from technical work on Explainable AI to normative debates on AI Governance—that mostly operate as functional silos. We argue that the central challenge is not a lack of research but a need for intellectual integration. This map provides a foundational tool for scholars to build conceptual bridges and foster a more cumulative body of knowledge.
Reference: Varone, A. and Bolici, F. (2025) “Mapping Thematic Ecosystems at the Intersection of Artificial Intelligence and Public Administration Research,” ITAIS 2025 Proceedings. XXII Conference of the Italian Chapter of AIS.
Abstract. Generative AI is bound to have profound effects on work and how it is organized. However, while existing research has focused on uncovering the impact of Generative AI systems on the execution of individual tasks, it has significantly under-emphasized the interdependent and collaborative nature of work. To support filling this gap, this paper investigates the implications of generative AI on collaborative work activities, emphasizing the need for a shift from a task-centric approach to a broader process-oriented perspective. By utilizing the 3C collaboration model—communication, coordination, and cooperation—this study employs a bibliometric-based analysis to map the current state of research in this domain, identifying gaps and opportunities for field development. Our analysis identifies significant disparities in academic focus on Generative AI in collaborative settings, highlighting under-researched areas such as cooperation and coordination. Moreover, research in the domain of collaboration is remarkably segregated, with few studies addressing multiple collaboration dimensions simultaneously. Lastly, while there is a strong emphasis on Human-AI interactions, the role of AI in mediating Human-Human interactions is less explored. Addressing these gaps could provide valuable insights into defining strategies to effectively integrate generative AI systems within complex organizational settings.
Reference: Bolici, F., Varone, A. and Diana, G. (2024) “From a Task-Centered Approach to Interdependent Activities: Revealing Gaps in Generative AI Research on Coordination and Cooperation,” ITAIS 2024 Proceedings. Available at: https://aisel.aisnet.org/itais2024/11.
Abstract. The increased pervasiveness of technological advancements in automation makes it urgent to address the question of how work is changing in response. Focusing on applications of machine learning (ML) that automate information tasks, we present a simple framework for identifying the impacts of an automated system on a task. From an analysis of popular press articles about ML, we develop three patterns for the use of ML: decision support, blended decision making and complete automation. We further consider how automation of one task might have implications for other tasks. Our main conclusion is that designers have a range of options for systems and that automation of tasks is not the same as automation of work.
Reference: Crowston, K. and Bolici, F. (2019) “Impacts of Machine Learning on Work,” Proceedings of the 52nd Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences (HICSS).