Chrisanthi Avgerou, Professor of IS, Dept. of Management, London School of Economics and Political Science.

Lessons about mid-range theory from the making of the paper "Information technology and government corruption in developing countries: Evidence from Ghana customs." The talk will refer to the following paper: Addo, A. and Avgerou, C. "Information technology and government corruption in developing countries: Evidence from Ghana customs," MIS Quarterly, forthcoming. Abstract: The literature on information technology (IT) and government corruption in developing countries indicates contradictory evidence about the realization of anti-corruption effects. So far, there is no theoretical explanation of why the anti-corruption potential of IT demonstrated in some countries is not realized in many other countries. Drawing evidence from a case study of information systems interventions at Ghana customs over 35 years, we investigate how and why IT's anti-corruption potential may be curtailed in the context of developing countries' governments and societies. We focus on IT-mediated petty corruption practices of street-level officers, which we consider to be socially embedded and institutionally conditioned phenomena. We find that conditions of possibility for the IT-mediated petty corruption practices are created during the implementation of information systems. The configuration of IT and organizational processes of a government agency is constrained by the broader government administration system and influenced by the vested interests of government officers, politicians, and businesses. Subsequently, the co-optation of IT for petty corruption practices is enabled by networks of relationships and institutions of patronage that extend across government, business, and society. We thus explain the often limited effects of IT on petty corruption as the inability of localized information systems implementations to change modes of government administration that are embedded in the enduring neopatrimonial institutions and politics of many developing countries.

Aron Lindberg, Assistant Professor of IS, Stevens Institute of Technology.

Using Traditional vs. Autonomous Design Tools: Design Problems and Performance Differentials (by Aron Lindberg, Stefan Seidel, Corinne Coen, and Michael Gau). Autonomous design tools are increasingly used across different design fields, including semiconductor chip design, video game design, and generative design of architecture and engineering products. These tools differ from traditional design tools, such as those for drawing or computer-aided design, in that they carry out design tasks and make decisions on behalf of the designer in a largely independent fashion. Because they rely on algorithms ranging from heuristics to machine learning, these tools are difficult to understand and are often used with little intermittent human intervention. Hence, they are often used in an experimental fashion, allowing designers to produce multiple designs at high speed, which can then be evaluated post hoc. To understand the key characteristics of autonomous tools and how these relate to their effectiveness in different design situations, we conducted a series of agent-based modeling simulations. Our results indicate that designers using autonomous design tools are more effective in simple design situations (situations with fewer possible solutions), while human designers using traditional tools are more effective in complex design situations where a large number of possible solutions exists. We suggest that these performance differences can be explained by variables pertaining to both the fitness landscape and the properties of the design system (i.e., designers using traditional or autonomous tools), namely its range of vision and number of restarts. Our findings hold implications for when traditional or autonomous tools are best used, and how they can be combined across various design processes.
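To make the mechanics concrete, here is a minimal, hypothetical Python sketch of a fitness-landscape search in the spirit of such agent-based models; the random landscape, all parameter values, and the mapping of "range of vision" and "number of restarts" onto search parameters are illustrative assumptions, not the authors' model.

```python
import random

# Hypothetical illustration: a random fitness landscape over binary design
# strings, searched by stylized design systems that differ in "range of
# vision" (how many neighboring designs are evaluated per step) and
# "number of restarts" (how many independent searches are run).
# None of these values come from the paper; they are placeholders.

N = 12                      # number of binary design decisions
random.seed(42)
fitness_cache = {}

def fitness(design):
    # Assign each design a stable random fitness, a stand-in for a rugged landscape.
    if design not in fitness_cache:
        fitness_cache[design] = random.random()
    return fitness_cache[design]

def neighbors(design, vision):
    # Sample `vision` one-bit variations of the current design.
    idxs = random.sample(range(N), min(vision, N))
    return [design[:i] + ("1" if design[i] == "0" else "0") + design[i + 1:]
            for i in idxs]

def search(vision, restarts, steps=50):
    # Hill-climb from several random starting designs; keep the best result.
    best = 0.0
    for _ in range(restarts):
        current = "".join(random.choice("01") for _ in range(N))
        for _ in range(steps):
            current = max(neighbors(current, vision) + [current], key=fitness)
        best = max(best, fitness(current))
    return best

# Stylized comparison: a "traditional" designer (narrow vision, one attempt)
# vs. an "autonomous tool" (wide vision, many cheap restarts).
print("traditional:", search(vision=2, restarts=1))
print("autonomous :", search(vision=8, restarts=20))
```

The parameters can be varied to explore how vision and restarts trade off on landscapes of differing ruggedness; a faithful replication would require the authors' actual model.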

Eric Walden, Professor of IS and Decision Sciences, Texas Tech Rawls College of Business, and Director of the Texas Tech Neuroimaging Institute.

Delays In Information Presentation Lead To Brain State Switching, Which Slows User Response Time. System delays are a major factor harming user experience. Long delays often result in system abandonment, decreased user performance, and lost revenue for businesses. Although studies have made important contributions on the consequences of delays, less is known about why system delays harm the user experience. Using fMRI, we examined how long system delays, compared to short delays, can change a user's brain state. Results showed that brain state switching was more likely during a long delay than during a short delay. Brain state switching was also more likely at the beginning of a task following a long delay than following a short delay. The default-mode network was more active during long delays than when users were engaged in the task. Furthermore, long delays were significantly related to increased decision time in the task following a delay, and this effect was mediated by brain state switching at the beginning of the post-delay task. Additionally, fMRI results suggested that the task became more effortful after long delays than after short delays, as evidenced by increased brain activation. Moreover, this brain activation mirrored activation patterns observed in people experiencing pain.
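Since the abstract does not report the statistical procedure, the following Python sketch illustrates one common regression-based way to estimate such a mediation effect (delay length to brain state switching to decision time) on simulated data; the variable names, coefficients, and estimator are illustrative assumptions, not the study's model.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical mediation illustration: delay (X) -> switching (M) -> decision time (Y).
# The data are simulated; the paper's variables and estimates are not reproduced.
rng = np.random.default_rng(0)
n = 200
delay = rng.integers(0, 2, n).astype(float)           # 0 = short, 1 = long
switching = 0.5 * delay + rng.normal(0, 1, n)         # mediator
decision_time = 0.4 * switching + 0.1 * delay + rng.normal(0, 1, n)

# Path a: delay -> switching.
a = sm.OLS(switching, sm.add_constant(delay)).fit().params[1]

# Path b and direct effect c': regress decision time on mediator and delay.
X = sm.add_constant(np.column_stack([switching, delay]))
fit = sm.OLS(decision_time, X).fit()
b, c_prime = fit.params[1], fit.params[2]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")
# A real analysis would bootstrap the indirect effect to test its significance.
```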

Natalia Levina, Professor at the New York University Stern School of Business, and Director of the Fubon Center for Technology, Business and Innovation.

To incorporate or not to incorporate AI for critical judgments: How professionals deal with opacity using AI for medical diagnosis. Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work, yet the opacity of AI tools is a growing concern, as it is difficult to understand or explain the results they produce. Organizational researchers are only starting to understand whether and how this transformation unfolds in practice. We conducted an in-depth field study in a major US hospital where AI tools were being used within three different radiology departments to form critical judgments: breast cancer, lung cancer, and bone age. In all three departments, professionals experienced a surge in uncertainty due to the opacity of the AI tools' results, which often conflicted with their initial diagnoses yet provided no insight into the underlying reasoning or logic. We found that how professionals dealt with this opacity, and its impact on their overall uncertainty, was critical to whether and how they incorporated the AI results. Only in one of the three departments did professionals meaningfully and consistently incorporate AI results into their final judgments. This study reveals that only in this department did the AI tool's results directly relate to professionals' locus of uncertainty, leading them to develop rich practices for interrogating the opaque AI results; in this way, using and incorporating the AI results reduced the overall uncertainty of forming their final judgments. Our study unpacks the challenges involved in "augmenting" professional judgment with powerful yet opaque technologies and contributes to the literatures on opacity in AI, the adoption of new technologies, and the production of knowledge.

Invited speakers: Hillol Bala (Indiana Univ.), Akshat Lakhiwal (Indiana Univ.), and Pierre-Majorique Léger (HEC Montréal)

Love Me or Love Me Not: Behavioral and Neurophysiological Assessment of Ambivalence to Information. The proliferation of digital platforms has made information widely available to individuals who rely on it to make day-to-day decisions. This paper focuses on why and how information presented on digital platforms, and its associated valence (e.g., positivity and negativity), may cumulatively elicit mixed feelings among individuals and influence their decisions. It is theorized that as individuals process information, they can experience coexisting positive and negative dispositions (i.e., ambivalence), which ultimately influence their attention and elicit distinct behavioral outcomes. Yet summarizing these feelings using existing visual representations often results in a simplified positive-negative distinction, where more nuanced feelings and attitudes such as ambivalence and indifference are practically indistinguishable. Four randomized controlled experiments, including an electroencephalography (EEG) study, were conducted to examine how ambivalence elicited by information presented in various ways on digital platforms may draw varying degrees of attention and influence decisions. The ability of current information representation structures on digital platforms to capture or represent mixed feelings (such as ambivalence) is examined and compared to a bivariate intervention that correctly elicits attitudes like ambivalence. The results not only emphasize that mixed feelings such as ambivalence elicit distinct behavioral and neurophysiological outcomes, but also that the inability of digital platforms to accurately recognize, interpret, and present these outcomes could limit individuals' ability to make fully informed decisions.
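To illustrate why a single valence scale cannot distinguish ambivalence from indifference, here is a small, hypothetical Python sketch using the similarity-intensity ambivalence index commonly attributed to Thompson, Zanna, and Griffin (1995); the formula and example ratings are illustrative and not taken from the paper.

```python
# Hypothetical illustration of univariate vs. bivariate attitude scoring.
# Similarity-intensity index (one standard formulation, assumed here):
#   ambivalence = (P + N) / 2 - |P - N|

def net_valence(p: float, n: float) -> float:
    # Univariate summary: collapses positivity and negativity into one number.
    return p - n

def ambivalence(p: float, n: float) -> float:
    # Bivariate summary: high when both reactions are strong AND similar.
    return (p + n) / 2 - abs(p - n)

for label, (p, n) in {"indifferent": (0, 0), "ambivalent": (5, 5)}.items():
    print(f"{label}: net valence = {net_valence(p, n)}, "
          f"ambivalence = {ambivalence(p, n)}")
# Both cases have net valence 0; only the bivariate index separates them,
# which is the intuition behind the bivariate intervention described above.
```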

Virtual presentation of a "scientific reflection in progress." Speakers: Guy Paré, Member of GReSI, Holder of the Research Chair in Connected Health (IT Dept.) and Director of the PhD Program at HEC, and Gerit Wagner, Postdoctoral Researcher with the Research Chair in Connected Health (IT Dept.)

Theory elaboration in information systems: opportunities, tactics and guidelines. Abstract: In this paper, we explain the notion of theory elaboration, arguing that this mode of theorizing is essential for improving the explanatory power of existing information systems (IS) theories. This way of advancing knowledge has arguably received limited attention in debates on theorizing, despite its substantial promise for IS research. Complementing approaches of theory generation and theory testing, elaboration offers a range of tactics for within-theory improvement aimed at the cumulative progression of explanatory power and, ultimately, a stronger core of theories in a given discipline. We believe theory elaboration offers promising opportunities for knowledge development in our field. Our work, presented as a commentary paper, is intended to clarify the process of theory elaboration, to expose researchers and students to the different elaboration tactics, and to provide a set of guidelines for prospective authors of theory elaboration papers.

Likoebe Maruping, Professor of Computer IS and Member of the Center for Digital Innovation (CDI), J. Mack Robinson College of Business, Georgia State University.

Open Source Collaboration in New Ventures. Open source collaboration (OSC) platforms, such as GitHub, have emerged over the past decade to become a salient venue for organizing innovation efforts and output. As a result, an increasing number of entrepreneurial firms are collaborating with open communities on such platforms to develop and scale their new ventures. Through the lens of open innovation, we examine value creation and value capture in high-tech startups' external collaboration on OSC platforms. We develop a theoretical framework to explicate how engagement in OSC may affect the value of startup firms and how this effect is contingent on the stage of venture maturity (conception, commercialization, or growth) and the mode of OSC engagement (inbound or outbound). In analyses that pool 22,896 matched startups with monthly panel observations between 2008 and 2017, we find a positive and significant value-added effect of OSC on startups, but the effect is sensitive to the stage of startup maturity and the mode of OSC engagement. In particular, startups in the conception and commercialization stages benefit more from inbound OSC, whereas startups in the growth stage benefit more from outbound OSC. As startups increasingly rely on OSC platforms for organizing innovation, our contribution is to show whether, when, and how knowledge flows through startups' OSC might affect the value of startup firms.
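As a rough illustration of the kind of panel specification such an analysis implies (the abstract does not give the actual estimator or variables), here is a hypothetical Python sketch regressing startup value on OSC engagement interacted with venture stage and engagement mode, on simulated data; all names and the simple dummy-variable OLS are assumptions, not the paper's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel-style specification on simulated data: the OSC effect
# is allowed to vary by venture stage and engagement mode, with year dummies
# standing in for time fixed effects.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "value": rng.normal(10, 2, n),                           # startup value proxy
    "osc": rng.integers(0, 2, n),                            # engages in OSC (0/1)
    "stage": rng.choice(["conception", "commercialization", "growth"], n),
    "mode": rng.choice(["inbound", "outbound"], n),
    "year": rng.integers(2008, 2018, n),
})

model = smf.ols("value ~ osc * C(stage) + osc * C(mode) + C(year)", data=df).fit()
print(model.summary().tables[1])
# The osc:C(stage) and osc:C(mode) interaction terms capture the contingency
# argument: whether the OSC effect differs across maturity stages and modes.
```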