Megacorp: From Cyberdystopian Vision to Technoeconomic Reality

Megacorp (front cover)

ISBN 978-1-944373-30-6 • Defragmenter Media, 2019 • 322 pages

The image of the “megacorp” – the ruthless, sinister, high-tech global conglomerate that’s grown so large and powerful that it has acquired the characteristics of a sovereign state – is one of the iconic elements of cyberpunk fiction. Such a megacorp maintains its own army, creates its own laws and currency, grants citizenship to employees and customers, and governs vast swaths of cyberspace and the physical world. If it allows traditional governments to survive in some vestigial form, it’s only so they can handle those mundane tasks that the megacorp doesn’t want to deal with itself. By these standards, contemporary companies like Amazon, Apple, Facebook, Google, Microsoft, ExxonMobil, and Walmart aren’t (yet) “megacorps”; they’re the playthings that megacorps gobble up to use for spare parts.

This volume develops a comprehensive intellectual history of the megacorp. It locates forebears of the cyberpunk megacorp not only in earlier fictional works like Čapek’s R.U.R. (1921) and von Harbou’s Metropolis (1925) but in a string of real-world organizations ranging from the 17th-century British and Dutch East India Companies to the Pullman Palace Car Company, the Ford Motor Company, and late 20th-century Japanese keiretsu and South Korean chaebol – as well as in the nearly indestructible oligopolistic “megacorp” described in the pioneering theory of American economist Alfred Eichner.

By investigating the nature of the cyberpunk megacorp as a political entity, commercial entity, producer and exploiter of futuristic technologies, and generator or manipulator of culture, the volume highlights differences between the megacorps of “classical” cyberpunk and post-cyberpunk fiction. Classical cyberpunk megacorps – portrayed in novels like Gibson’s Sprawl trilogy, films like RoboCop and Johnny Mnemonic, and games like Cyberpunk, Cyberspace, and Syndicate – are often ostentatiously malevolent and obsessed with short-term financial profits to the exclusion of all else; the over-the-top depictions of such companies serve a dramatic purpose and are not offered by their authors as serious futurological studies. On the other hand, the more nuanced and philosophically rich portrayals of megacorps in post-cyberpunk works like Shirow’s manga The Ghost in the Shell reveal companies that are less overtly evil, possess a broader and more plausible range of long-term strategic goals, and coexist alongside conventional governments in a state of (begrudging) mutual respect. Still other works, like the game Shadowrun, depict companies that combine elements of both classical cyberpunk and post-cyberpunk megacorps.

Drawing on such analyses, the volume concludes by exploring how the idea of the post-cyberpunk megacorp anticipated a new type of real-world megacorp – the unfathomably large, fast, and complex “rhizocorp” – that’s now being made possible through ongoing revolutions in the exploitation of robotics, AI, and the Internet of Things – and which threatens to become the dominant economic, political, and sociocultural power of our technologically posthumanized future world.


An Axiology of Information Security for Futuristic Neuroprostheses: Upholding Human Values in the Context of Technological Posthumanization

Frontiers in Neuroscience 11, 605 (2017); MNiSW 2016 List A: 30 points; 2017 Impact Factor: 3.566

ABSTRACT: Previous works exploring the challenges of ensuring information security for neuroprosthetic devices and their users have typically built on the traditional InfoSec concept of the “CIA Triad” of confidentiality, integrity, and availability. However, we argue that the CIA Triad provides an increasingly inadequate foundation for envisioning information security for neuroprostheses, insofar as it presumes that (1) any computational systems to be secured are merely instruments for expressing their human users’ agency, and (2) computing devices are conceptually and practically separable from their users. Drawing on contemporary philosophy of technology and philosophical and critical posthumanist analysis, we contend that futuristic neuroprostheses could conceivably violate these basic InfoSec presumptions, insofar as (1) they may alter or supplant their users’ biological agency rather than simply supporting it, and (2) they may structurally and functionally fuse with their users to create qualitatively novel “posthumanized” human-machine systems that cannot be secured as though they were conventional computing devices. Simultaneously, it is noted that many of the goals that have been proposed for future neuroprostheses by InfoSec researchers (e.g., relating to aesthetics, human dignity, authenticity, free will, and cultural sensitivity) fall outside the scope of InfoSec as it has historically been understood and touch on a wide range of ethical, aesthetic, physical, metaphysical, psychological, economic, and social values. We suggest that the field of axiology can provide useful frameworks for more effectively identifying, analyzing, and prioritizing such diverse types of values and goods that can (and should) be pursued through InfoSec practices for futuristic neuroprostheses.


The Handbook of Information Security for Advanced Neuroprosthetics

ISBN 978-1-944373-09-2 • Second edition • Synthypnion Academic, 2017 • 324 pages

How does one ensure information security for a computer that is entangled with the structures and processes of a human brain – and for the human mind that is interconnected with such a device? The need to provide information security for neuroprosthetic devices grows more pressing as increasing numbers of people utilize therapeutic technologies such as cochlear implants, retinal prostheses, robotic prosthetic limbs, and deep brain stimulation devices. Moreover, emerging neuroprosthetic technologies for human enhancement are expected to increasingly transform their human users’ sensory, motor, and cognitive capacities in ways that generate new ‘posthumanized’ sociotechnological realities. In this context, it is essential not only to ensure the information security of such neuroprostheses themselves but – more importantly – to ensure the psychological and physical health, autonomy, and personal identity of the human beings whose cognitive processes are inextricably linked with such devices. InfoSec practitioners must not only guard against threats to the confidentiality and integrity of data stored within a neuroprosthetic device’s internal memory; they must also guard against threats to the confidentiality and integrity of thoughts, memories, and desires existing within the mind of the device’s human host.

This second edition of The Handbook of Information Security for Advanced Neuroprosthetics updates the previous edition’s comprehensive investigation of these issues from both theoretical and practical perspectives. It provides an introduction to the current state of neuroprosthetics and expected future trends in the field, along with an introduction to fundamental principles of information security and an analysis of how they must be re-envisioned to address the unique challenges posed by advanced neuroprosthetics. A two-dimensional cognitional security framework is presented whose security goals are designed to protect a device’s human host in his or her roles as a sapient metavolitional agent, embodied embedded organism, and social and economic actor. Practical consideration is given to information security responsibilities and roles within an organizational context and to the application of preventive, detective, and corrective or compensating security controls to neuroprosthetic devices, their host-device systems, and the larger supersystems in which they operate. Finally, it is shown that while implantable neuroprostheses create new kinds of security vulnerabilities and risks, they may also serve to enhance the information security of some types of human hosts (such as those experiencing certain neurological conditions).


The Diffuse Intelligent Other: An Ontology of Nonlocalizable Robots as Moral and Legal Actors

In Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, pp. 177-98 • Farnham: Ashgate, 2016

ABSTRACT: Much thought has been given to the question of who bears moral and legal responsibility for actions performed by robots. Some argue that responsibility could be attributed to a robot if it possessed human-like autonomy and metavolitionality, and that while such capacities can potentially be possessed by a robot with a single spatially compact body, they cannot be possessed by a spatially disjunct, decentralized collective such as a robotic swarm or network. However, advances in ubiquitous robotics and distributed computing open the door to a new form of robotic entity that possesses a unitary intelligence, despite the fact that its cognitive processes are not confined within a single spatially compact, persistent, identifiable body. Such a “nonlocalizable” robot may possess a body whose myriad components interact with one another at a distance and which is continuously transforming as components join and leave the body. Here we develop an ontology for classifying such robots on the basis of their autonomy, volitionality, and localizability. Using this ontology, we explore the extent to which nonlocalizable robots—including those possessing cognitive abilities that match or exceed those of human beings—can be considered moral and legal actors that are responsible for their own actions.
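The three classificatory axes named in the abstract (autonomy, volitionality, localizability) can be pictured as a simple typing scheme. The following Python sketch is an illustrative reading only – the names and the `candidate_moral_actor` rule are hypothetical, not the chapter’s own formalism:

```python
from dataclasses import dataclass

# Illustrative encoding of the three classificatory axes described above.
@dataclass(frozen=True)
class RobotProfile:
    autonomous: bool   # acts without continuous external control
    volitional: bool   # possesses (meta)volitional capacities
    localizable: bool  # confined to a single compact, persistent body

def candidate_moral_actor(robot: RobotProfile) -> bool:
    """A robot is treated here as a candidate moral/legal actor only if it is
    both autonomous and volitional; localizability is deliberately NOT
    required, reflecting the argument that nonlocalizable robots (swarms,
    networks) may nonetheless possess a unitary intelligence."""
    return robot.autonomous and robot.volitional

# A spatially disjunct robotic swarm with unitary, human-like cognition:
swarm = RobotProfile(autonomous=True, volitional=True, localizable=False)
```

Under this toy scheme, `candidate_moral_actor(swarm)` holds even though the swarm lacks a single identifiable body – the point at issue in the chapter.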


Managing the Ethical Dimensions of Brain-Computer Interfaces in eHealth: An SDLC-based Approach

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 876-90 • Engomi: EuroMed Press, 2016

ABSTRACT: A growing range of brain-computer interface (BCI) technologies is being employed for purposes of therapy and human augmentation. While much thought has been given to the ethical implications of such technologies at the ‘macro’ level of social policy and ‘micro’ level of individual users, little attention has been given to the unique ethical issues that arise during the process of incorporating BCIs into eHealth ecosystems. In this text a conceptual framework is developed that enables the operators of eHealth ecosystems to manage the ethical components of such processes in a more comprehensive and systematic way than has previously been possible. The framework’s first axis defines five ethical dimensions that must be successfully addressed by eHealth ecosystems: 1) beneficence; 2) consent; 3) privacy; 4) equity; and 5) liability. The second axis describes five stages of the systems development life cycle (SDLC) process whereby new technology is incorporated into an eHealth ecosystem: 1) analysis and planning; 2) design, development, and acquisition; 3) integration and activation; 4) operation and maintenance; and 5) disposal. Known ethical issues relating to the deployment of BCIs are mapped onto this matrix in order to demonstrate how it can be employed by the managers of eHealth ecosystems as a tool for fulfilling ethical requirements established by regulatory standards or stakeholders’ expectations. Beyond its immediate application in the case of BCIs, we suggest that this framework may also be utilized beneficially when incorporating other innovative forms of information and communications technology (ICT) into eHealth ecosystems.
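The two-axis framework described in the abstract lends itself to a simple matrix representation. The Python sketch below shows one possible encoding of the 5 × 5 grid of ethical dimensions and SDLC stages; the example issue and function names are hypothetical illustrations, not drawn from the paper:

```python
# Hypothetical sketch of the two-axis ethics/SDLC matrix described above.
DIMENSIONS = ["beneficence", "consent", "privacy", "equity", "liability"]
STAGES = [
    "analysis and planning",
    "design, development, and acquisition",
    "integration and activation",
    "operation and maintenance",
    "disposal",
]

# Each (dimension, stage) cell holds the known ethical issues mapped to it.
matrix = {(d, s): [] for d in DIMENSIONS for s in STAGES}

def register_issue(dimension: str, stage: str, issue: str) -> None:
    """File a known BCI ethical issue in the appropriate cell of the grid."""
    matrix[(dimension, stage)].append(issue)

# Illustrative example: informed consent must be secured before a BCI
# is activated within the eHealth ecosystem.
register_issue("consent", "integration and activation",
               "obtain informed consent before device activation")

# An ecosystem manager can then audit any cell of the grid:
issues = matrix[("consent", "integration and activation")]
```

A matrix like this could serve as the kind of auditing tool the paper envisions: empty cells flag stage/dimension combinations whose ethical requirements have not yet been analyzed.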


Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction

In Digital Ecosystems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp. 85-98 • Warsaw: Digital Economy Lab, University of Warsaw, 2016

ABSTRACT: For many employees, ‘work’ is no longer something performed while sitting at a computer in an office. Employees in a growing number of industries are expected to carry mobile devices and be available for work-related interactions even when beyond the workplace and outside of normal business hours. In this article it is argued that a future step will increasingly be to move work-related information and communication technology (ICT) inside the human body through the use of neuroprosthetics, to create employees who are always ‘online’ and connected to their workplace’s digital ecosystems. At present, neural implants are used primarily to restore abilities lost through injury or illness; however, their use for augmentative purposes is expected to grow, resulting in populations of human beings who possess technologically altered capacities for perception, memory, imagination, and the manipulation of physical environments and virtual cyberspace. Such workers may exchange thoughts and share knowledge within posthuman cybernetic networks that are inaccessible to unaugmented human beings. Scholars note that despite their potential benefits, such neuroprosthetic devices may create numerous problems for their users, including a sense of alienation, the threat of computer viruses and hacking, financial burdens, and legal questions surrounding ownership of intellectual property produced while using such implants. Moreover, different populations of human beings may eventually come to occupy irreconcilable digital ecosystems as some persons embrace neuroprosthetic technology, others feel coerced into augmenting their brains to compete within the economy, others might reject such technology, and still others will simply be unable to afford it.

In this text we propose a model for analyzing how particular neuroprosthetic devices will either facilitate human beings’ participation in new forms of socioeconomic interaction and digital workplace ecosystems – or undermine their mental and physical health, privacy, autonomy, and authenticity. We then show how such a model can be used to create device ontologies and typologies that help us classify and understand different kinds of advanced neuroprosthetic devices according to the impact that they will have on individual human beings.


From Stand Alone Complexes to Memetic Warfare: Cultural Cybernetics and the Engineering of Posthuman Popular Culture

50 Shades of Popular Culture International Conference • Facta Ficta Research Centre, Kraków • February 19, 2016

ABSTRACT: Here we argue that five emerging social and technological trends are creating new possibilities for the instrumentalization (or even “weaponization”) of popular culture for commercial, ideological, political, or military ends and for the development of a posthuman popular culture that is no longer solely produced by or for “humanity” as presently understood. These five trends are: 1) the decentralization of the sources of popular culture, as reflected in the ability of ordinary users to create and upload content that “goes viral” within popular culture, as well as the use of “astroturfing” and paid “troll armies” by corporate or state actors to create the appearance of broad-based grassroots support for particular products, services, actions, or ideologies; 2) the centralization of the mechanisms for accessing popular culture, as seen in the role of instruments like Google’s search engine, YouTube, Facebook, Instagram, and Wikipedia in concentrating the distribution channels for cultural products, as well as efforts by state actors to censor social media content perceived as threatening or disruptive; 3) the personalization of popular culture, as manifested in the growth of cultural products like computer games that dynamically reconfigure themselves in response to a player’s behavior, thereby creating a different product for each individual that is adapted to a user’s unique experiences, desires, and psychological characteristics; 4) the automatization of the creation of products of popular culture, as seen in the automated high-speed generation of webpages, artwork, music, memes, and computer game content by AI systems that could potentially allow venues of popular culture (such as the Internet) to be flooded with content designed to influence a social group in particular ways; and 5) the virtualization of the technological systems and mechanisms for creating, transmitting, and experiencing the products of popular culture, as witnessed in the development of all-purpose nodes (such as smartphones) that are capable of handling a full range of cultural products in the form of still images, video, audio, text, and interactive experiences, and the growing digitalization of cultural products that allows them to be more easily manipulated and injected into the popular culture of other states or social groups, bypassing physical and political barriers.

While these trends are expected to yield a broad range of positive and negative impacts, we focus on a particular subset of these impacts. Namely, we argue that the convergence of these five trends opens the door for the creation of popular culture that: 1) does not exist in any permanent, tangible physical artifacts but only as a collection of continuously transforming digital data that is stored on the servers of a few powerful corporate or state actors and is subject to manipulation or degradation as a result of computer viruses, hacking, power outages, or other factors; 2) can be purposefully and effectively engineered using techniques commonly employed within IT management, electronics engineering, marketing, and other disciplines; and 3) can become a new kind of weapon and battleground in struggles for military, political, ideological, and commercial superiority on the part of corporate, state, and other actors.

In order to stimulate thinking about ways in which these trends might develop, we conclude by considering two fictional near-future worlds – those depicted in Ghost in the Shell: Stand Alone Complex and Transhuman Space: Toxic Memes – in which the further evolution of these five trends is shown as leading to the neurocybernetically facilitated manipulation of popular culture, “memetic warfare,” and related phenomena. We suggest that these fictional works represent examples of self-reflexive futurology: i.e., elements of contemporary popular culture that attempt to anticipate and explore the ways in which future popular culture could be purposefully engineered, instrumentalized, and even weaponized in the service of a diverse array of ends.


Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money that Advances Human Ethical Values

Annales. Etyka w Życiu Gospodarczym / Annales: Ethics in Economic Life 18, no. 4 (2015), pp. 85-98; MNiSW 2015 List B: 10 points

ABSTRACT: Cryptocurrencies like Bitcoin are offering new avenues for economic empowerment to individuals around the world. However, they also provide a powerful tool that facilitates criminal activities such as human trafficking and illegal weapons sales that cause great harm to individuals and communities. Cryptocurrency advocates have argued that the ethical dimensions of cryptocurrency are not qualitatively new, insofar as money has always been understood as a passive instrument that lacks ethical values and can be used for good or ill purposes. In this paper, we challenge the presumption that money must be “value-neutral.” Building on advances in artificial intelligence, cryptography, and machine ethics, we argue that it is possible to design artificially intelligent cryptocurrencies that are not ethically neutral but which autonomously regulate their own use in a way that reflects the ethical values of particular human beings – or even entire human societies. We propose a technological framework for such cryptocurrencies and then analyze the legal, ethical, and economic implications of their use. Finally, we suggest that the development of cryptocurrencies possessing ethical as well as monetary value can provide human beings with a new economic means of positively influencing the ethos and values of their societies.


Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity

Creatio Fantastica no. 3(50) (2015)

ABSTRACT: While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings who constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games.
