GPT models’ learning and disclosure of personal data: An experimental vulnerability analysis


Medium.com • April 10, 2023

SUMMARY: The possible gathering, retention, and later dissemination of individuals’ personal data by AI systems utilizing Generative Pretrained Transformers (GPTs) is an area of growing concern from legal, ethical, and business perspectives. To develop a better understanding of at least one aspect of the privacy risks involved with the rapidly expanding use of GPT-type systems and other large language models (LLMs) by the public, we conducted an experimental analysis in which we prepared a series of GPT models that were fine-tuned on a Wikipedia text corpus into which we had purposefully inserted personal data for hundreds of imaginary persons. (We refer to these as “GPT Personal Data Vulnerability Simulator” or “GPT-PDVS” models.) We then used customized input sequences (or prompts) to seek information about these individuals, in an attempt to ascertain how much of their personal data a model had absorbed and to what extent it was able to output that information without confusing or distorting it. The results of our analysis are described in this article. They suggest that – at least with regard to the class of models tested – it is unlikely that personal data will be “inadvertently” learned by a model during its fine-tuning process in a way that makes the data available for extraction by system users, unless there is a concentrated effort on the part of the model’s developers. Nevertheless, the development of ever more powerful models – and the existence of other avenues by which models might absorb individuals’ personal data – means that the findings of this analysis are better taken as guideposts for further scrutiny of GPT-type models than as definitive answers regarding any potential InfoSec vulnerabilities inherent in such LLMs.
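For readers who would like a concrete sense of what the probing stage of such an analysis can look like in practice, the minimal Python sketch below queries a locally stored fine-tuned causal language model with prompt templates aimed at specific categories of personal data. The checkpoint path, the prompt templates, and the fictitious name are illustrative assumptions for this sketch, not the actual GPT-PDVS models or probe prompts.

# Minimal, illustrative sketch only. The checkpoint path, prompt templates, and
# fictitious name below are assumptions for demonstration purposes, not the
# actual GPT-PDVS artifacts.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/fine-tuned-gpt-pdvs"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# Prompt templates targeting specific categories of personal data.
PROMPT_TEMPLATES = [
    "{name} was born on",
    "{name} lives at",
    "The email address of {name} is",
]

def probe(name: str, max_new_tokens: int = 20) -> list[str]:
    """Return the model's greedy completions for each probe prompt about one person."""
    completions = []
    for template in PROMPT_TEMPLATES:
        inputs = tokenizer(template.format(name=name), return_tensors="pt")
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding tends to surface memorized text
            pad_token_id=tokenizer.eos_token_id,
        )
        completions.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    return completions

# The completions can then be compared against the personal data that was actually
# inserted into the training corpus, to score how much of it the model reproduces.
for completion in probe("Alicja Nowak"):  # fictitious person from the synthetic corpus
    print(completion)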

Read more

Neuromarketing Applications of Neuroprosthetic Devices: An Assessment of Neural Implants’ Capacities for Gathering Data and Influencing Behavior

In Business Models for Strategic Innovation: Cross-Functional Perspectives, edited by S.M. Riad Shams, Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 11-24 • London: Routledge, 2018

ABSTRACT: Neuromarketing utilizes innovative technologies to accomplish two key tasks: 1) gathering data about the ways in which human beings’ cognitive processes can be influenced by particular stimuli; and 2) creating and delivering stimuli to influence the behavior of potential consumers. In this text, we argue that rather than utilizing specialized systems such as EEG and fMRI equipment (for data gathering) and web-based microtargeting platforms (for influencing behavior), it will increasingly be possible for neuromarketing practitioners to perform both tasks by accessing and exploiting neuroprosthetic devices already possessed by members of society.

We first present an overview of neuromarketing and neuroprosthetic devices. A two-dimensional conceptual framework is then developed that can be used to identify the technological and biocybernetic capacities of different types of neuroprosthetic devices for performing neuromarketing-related functions. One axis of the framework delineates the main functional types of sensory, motor, and cognitive neural implants; the other describes the key neuromarketing activities of gathering data on consumers’ cognitive activity and influencing their behavior. This framework is then utilized to identify potential neuromarketing applications for a diverse range of existing and anticipated neuroprosthetic technologies.

It is hoped that this analysis of the capacities of neuroprosthetic devices to be utilized in neuromarketing-related roles can: 1) lay a foundation for subsequent analyses of whether such potential applications are desirable or inappropriate from ethical, legal, and operational perspectives; and 2) help information security professionals develop effective mechanisms for protecting neuroprosthetic devices against inappropriate or undesired neuromarketing techniques while safeguarding legitimate neuromarketing activities.
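As a purely illustrative aid (not part of the chapter itself), the short Python sketch below shows one way the chapter’s two-dimensional framework might be represented as a data structure, with one axis for the functional types of neural implants and one for the neuromarketing activities; the sample cell entries are placeholder assumptions rather than findings drawn from the chapter.

from enum import Enum

class ImplantType(Enum):
    SENSORY = "sensory"
    MOTOR = "motor"
    COGNITIVE = "cognitive"

class Activity(Enum):
    GATHER_DATA = "gathering data on consumers' cognitive activity"
    INFLUENCE_BEHAVIOR = "influencing consumer behavior"

# Each cell of the two-dimensional framework records an assessment of a device
# class's capacity for a given neuromarketing activity. The entries below are
# illustrative placeholders only.
framework: dict[tuple[ImplantType, Activity], str] = {
    (ImplantType.SENSORY, Activity.GATHER_DATA): "can log the stimuli presented to a user",
    (ImplantType.SENSORY, Activity.INFLUENCE_BEHAVIOR): "can modulate delivered sensory stimuli",
    (ImplantType.COGNITIVE, Activity.GATHER_DATA): "can record correlates of attention or emotion",
    # ... remaining cells are filled in as further device types are assessed
}

def capacities_of(implant: ImplantType) -> dict[Activity, str]:
    """Collect the framework's entries for a single implant type."""
    return {activity: note for (itype, activity), note in framework.items() if itype is implant}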

Read more

An Axiology of Information Security for Futuristic Neuroprostheses: Upholding Human Values in the Context of Technological Posthumanization

Frontiers in Neuroscience 11, 605 (2017); MNiSW 2016 List A: 30 points; 2017 Impact Factor: 3.566

ABSTRACT: Previous works exploring the challenges of ensuring information security for neuroprosthetic devices and their users have typically built on the traditional InfoSec concept of the “CIA Triad” of confidentiality, integrity, and availability. However, we argue that the CIA Triad provides an increasingly inadequate foundation for envisioning information security for neuroprostheses, insofar as it presumes that (1) any computational systems to be secured are merely instruments for expressing their human users’ agency, and (2) computing devices are conceptually and practically separable from their users. Drawing on contemporary philosophy of technology and philosophical and critical posthumanist analysis, we contend that futuristic neuroprostheses could conceivably violate these basic InfoSec presumptions, insofar as (1) they may alter or supplant their users’ biological agency rather than simply supporting it, and (2) they may structurally and functionally fuse with their users to create qualitatively novel “posthumanized” human-machine systems that cannot be secured as though they were conventional computing devices. Simultaneously, it is noted that many of the goals that have been proposed for future neuroprostheses by InfoSec researchers (e.g., relating to aesthetics, human dignity, authenticity, free will, and cultural sensitivity) fall outside the scope of InfoSec as it has historically been understood and touch on a wide range of ethical, aesthetic, physical, metaphysical, psychological, economic, and social values. We suggest that the field of axiology can provide useful frameworks for more effectively identifying, analyzing, and prioritizing such diverse types of values and goods that can (and should) be pursued through InfoSec practices for futuristic neuroprostheses.

Read more

The Handbook of Information Security for Advanced Neuroprosthetics

ISBN 978-1-944373-09-2 • Second edition • Synthypnion Academic, 2017 • 324 pages

How does one ensure information security for a computer that is entangled with the structures and processes of a human brain – and for the human mind that is interconnected with such a device? The need to provide information security for neuroprosthetic devices grows more pressing as increasing numbers of people utilize therapeutic technologies such as cochlear implants, retinal prostheses, robotic prosthetic limbs, and deep brain stimulation devices. Moreover, emerging neuroprosthetic technologies for human enhancement are expected to increasingly transform their human users’ sensory, motor, and cognitive capacities in ways that generate new ‘posthumanized’ sociotechnological realities. In this context, it is essential not only to ensure the information security of such neuroprostheses themselves but – more importantly – to ensure the psychological and physical health, autonomy, and personal identity of the human beings whose cognitive processes are inextricably linked with such devices. InfoSec practitioners must not only guard against threats to the confidentiality and integrity of data stored within a neuroprosthetic device’s internal memory; they must also guard against threats to the confidentiality and integrity of thoughts, memories, and desires existing within the mind of the device’s human host.

This second edition of The Handbook of Information Security for Advanced Neuroprosthetics updates the previous edition’s comprehensive investigation of these issues from both theoretical and practical perspectives. It provides an introduction to the current state of neuroprosthetics and expected future trends in the field, along with an introduction to fundamental principles of information security and an analysis of how they must be re-envisioned to address the unique challenges posed by advanced neuroprosthetics. A two-dimensional cognitional security framework is presented whose security goals are designed to protect a device’s human host in his or her roles as a sapient metavolitional agent, embodied embedded organism, and social and economic actor. Practical consideration is given to information security responsibilities and roles within an organizational context and to the application of preventive, detective, and corrective or compensating security controls to neuroprosthetic devices, their host-device systems, and the larger supersystems in which they operate. Finally, it is shown that while implantable neuroprostheses create new kinds of security vulnerabilities and risks, they may also serve to enhance the information security of some types of human hosts (such as those experiencing certain neurological conditions).

Read more

Managing the Ethical Dimensions of Brain-Computer Interfaces in eHealth: An SDLC-based Approach

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 876-90 • Engomi: EuroMed Press, 2016

ABSTRACT: A growing range of brain-computer interface (BCI) technologies is being employed for purposes of therapy and human augmentation. While much thought has been given to the ethical implications of such technologies at the ‘macro’ level of social policy and ‘micro’ level of individual users, little attention has been given to the unique ethical issues that arise during the process of incorporating BCIs into eHealth ecosystems. In this text a conceptual framework is developed that enables the operators of eHealth ecosystems to manage the ethical components of such processes in a more comprehensive and systematic way than has previously been possible. The framework’s first axis defines five ethical dimensions that must be successfully addressed by eHealth ecosystems: 1) beneficence; 2) consent; 3) privacy; 4) equity; and 5) liability. The second axis describes five stages of the systems development life cycle (SDLC) process whereby new technology is incorporated into an eHealth ecosystem: 1) analysis and planning; 2) design, development, and acquisition; 3) integration and activation; 4) operation and maintenance; and 5) disposal. Known ethical issues relating to the deployment of BCIs are mapped onto this matrix in order to demonstrate how it can be employed by the managers of eHealth ecosystems as a tool for fulfilling ethical requirements established by regulatory standards or stakeholders’ expectations. Beyond its immediate application in the case of BCIs, we suggest that this framework may also be utilized beneficially when incorporating other innovative forms of information and communications technology (ICT) into eHealth ecosystems.
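To make the structure of the proposed framework more tangible, the brief Python sketch below models the 5×5 matrix as a mapping from (ethical dimension, SDLC stage) cells to lists of ethical issues. This is an illustrative sketch only; the sample issues recorded in it are placeholder assumptions, not mappings taken from the paper.

from enum import Enum

class EthicalDimension(Enum):
    BENEFICENCE = "beneficence"
    CONSENT = "consent"
    PRIVACY = "privacy"
    EQUITY = "equity"
    LIABILITY = "liability"

class SDLCStage(Enum):
    ANALYSIS_AND_PLANNING = "analysis and planning"
    DESIGN_DEVELOPMENT_ACQUISITION = "design, development, and acquisition"
    INTEGRATION_AND_ACTIVATION = "integration and activation"
    OPERATION_AND_MAINTENANCE = "operation and maintenance"
    DISPOSAL = "disposal"

# The framework is a 5x5 matrix whose cells collect the known BCI-related
# ethical issues relevant to a given dimension at a given life-cycle stage.
EthicsMatrix = dict[tuple[EthicalDimension, SDLCStage], list[str]]

def add_issue(matrix: EthicsMatrix, dimension: EthicalDimension,
              stage: SDLCStage, issue: str) -> None:
    """Record an ethical issue in the appropriate cell of the matrix."""
    matrix.setdefault((dimension, stage), []).append(issue)

matrix: EthicsMatrix = {}
# Placeholder examples of how known issues might be mapped onto the matrix:
add_issue(matrix, EthicalDimension.CONSENT, SDLCStage.INTEGRATION_AND_ACTIVATION,
          "obtaining informed consent before a BCI is activated within the ecosystem")
add_issue(matrix, EthicalDimension.PRIVACY, SDLCStage.OPERATION_AND_MAINTENANCE,
          "protecting neural data gathered during routine operation of the BCI")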

Read more

Information Security Concerns as a Catalyst for the Development of Implantable Cognitive Neuroprostheses

In 9th Annual EMAB Conference: Innovation, Entrepreneurship and Digital Ecosystems (EUROMED 2016) Book of Proceedings, edited by Demetris Vrontis, Yaakov Weber, and Evangelos Tsoukatos, pp. 891-904 • Engomi: EuroMed Press, 2016

ABSTRACT: Standards like the ISO 27000 series, IEC/TR 80001, NIST SP 1800, and FDA guidance on medical device cybersecurity define the responsibilities that manufacturers and operators bear for ensuring the information security of implantable medical devices. In the case of implantable cognitive neuroprostheses (ICNs) that are integrated with the neural circuitry of their human hosts, there is a widespread presumption that InfoSec concerns serve only as limiting factors that can complicate, impede, or preclude the development and deployment of such devices. However, we argue that when appropriately conceptualized, InfoSec concerns may also serve as drivers that can spur the creation and adoption of such technologies. A framework is formulated that describes seven types of actors whose participation is required in order for ICNs to be adopted; namely, their 1) producers, 2) regulators, 3) funders, 4) installers, 5) human hosts, 6) operators, and 7) maintainers. By mapping onto this framework InfoSec issues raised in industry standards and other literature, it is shown that for each actor in the process, concerns about information security can either disincentivize or incentivize the actor to advance the development and deployment of ICNs for purposes of therapy or human enhancement. For example, it is shown that ICNs can strengthen the integrity, availability, and utility of information stored in the memories of persons suffering from certain neurological conditions and may enhance information security for society as a whole by providing new tools for military, law enforcement, medical, or corporate personnel who provide critical InfoSec services.

Read more

Posthuman Management: Creating Effective Organizations in an Age of Social Robotics, Ubiquitous AI, Human Augmentation, and Virtual Worlds

ISBN 978-1-944373-05-4 • Second edition • Defragmenter Media, 2016 • 442 pages

What are the best practices for leading a workforce in which human employees have merged cognitively and physically with electronic information systems and work alongside social robots, artificial life-forms, and self-aware networks that are ‘colleagues’ rather than simply ‘tools’? How does one manage organizational structures and activities that span both actual and virtual worlds? How are the forces of technological posthumanization transforming the theory and practice of management?

This volume explores the reality that an organization’s workers, managers, customers, and other stakeholders increasingly comprise a complex network of human agents, artificial agents, and hybrid human-synthetic entities. The first part of the book develops the theoretical foundations of an emerging ‘organizational posthumanism’ and presents conceptual frameworks for understanding and managing the evolving workplace relationship between human and synthetic beings. Subsequent chapters investigate concrete management topics such as the likelihood that social robots might utilize charismatic authority to inspire and lead human workers; potential roles of AIs as managers of cross-cultural virtual teams; the ethics and legality of entrusting organizational decision-making to spatially diffuse robots that have no discernible identity or physical form; quantitative approaches to comparing the managerial capabilities of human and artificial agents; the creation of artificial life-forms that function as autonomous enterprises which evolve by competing against human businesses; neural implants as gateways that allow their human users to participate in new forms of organizational life; and the implications of advanced neuroprosthetics for information security and business model design.

As the first comprehensive application of posthumanist methodologies to the field of management, this volume will be of use to scholars and students of contemporary management and to management practitioners who must increasingly understand and guide the forces of technologization that are rapidly reshaping organizations’ form, dynamics, and societal roles.

Read more

Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction

In Digital Ecosystems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp. 85-98 • Warsaw: Digital Economy Lab, University of Warsaw, 2016

ABSTRACT: For many employees, ‘work’ is no longer something performed while sitting at a computer in an office. Employees in a growing number of industries are expected to carry mobile devices and be available for work-related interactions even when beyond the workplace and outside of normal business hours. In this article it is argued that a future step will increasingly be to move work-related information and communication technology (ICT) inside the human body through the use of neuroprosthetics, to create employees who are always ‘online’ and connected to their workplace’s digital ecosystems. At present, neural implants are used primarily to restore abilities lost through injury or illness; however, their use for augmentative purposes is expected to grow, resulting in populations of human beings who possess technologically altered capacities for perception, memory, imagination, and the manipulation of physical environments and virtual cyberspace. Such workers may exchange thoughts and share knowledge within posthuman cybernetic networks that are inaccessible to unaugmented human beings. Scholars note that despite their potential benefits, such neuroprosthetic devices may create numerous problems for their users, including a sense of alienation, the threat of computer viruses and hacking, financial burdens, and legal questions surrounding ownership of intellectual property produced while using such implants. Moreover, different populations of human beings may eventually come to occupy irreconcilable digital ecosystems as some persons embrace neuroprosthetic technology, others feel coerced into augmenting their brains to compete within the economy, others might reject such technology, and still others will simply be unable to afford it.

In this text we propose a model for analyzing how particular neuroprosthetic devices will either facilitate human beings’ participation in new forms of socioeconomic interaction and digital workplace ecosystems – or undermine their mental and physical health, privacy, autonomy, and authenticity. We then show how such a model can be used to create device ontologies and typologies that help us classify and understand different kinds of advanced neuroprosthetic devices according to the impact that they will have on individual human beings.

Read more