Gladden, Matthew E. “‘Modal hints’ for ManaGPT: Better AI Text Generation Through Prompts Employing the Language of Possibility, Probability, and Necessity.” Medium.com, March 27, 2023.
This article’s full text can be viewed on Medium.com or LinkedIn.
Summary. Crafting optimal input sequences (or “prompts”) for large language models is both an art and a science. When using models for text completion, a user’s goal is often to elicit generated texts that are coherent, rich in complexity and detail, substantial in length, and highly relevant to the prompt’s contents. Some elements that make for effective input sequences are common across many LLMs, while others may be specific to a single model (e.g., GPT-3, GPT-NeoX, or BLOOM).
In this article, we conduct an exploratory analysis of 4,080 sentence-completion responses generated by ManaGPT-1020. This model is an LLM that has been fine-tuned on a corpus of scholarly and popular works from the domain of management and organizational foresight, with the aim of engineering a model that can produce texts containing novel insights into the emerging impact of advanced AI, social robotics, virtual reality, and other “posthumanizing” technologies on the structure of organizations and our human experience of organizational life.
More specifically, we investigate how the length and quality of texts generated by the model vary in relation to “modal hints” supplied by a user’s input sequences. Such hints take the form of modal verbs and phrases that suggest the degree of possibility, probability, or logical or moral necessity that a completed sentence should reflect. Our preliminary analysis suggests that such “modal shading” of prompts can have at least as great an impact on the nature of the generated sentences as the user’s choice of subject for a given sentence.
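To make the idea concrete, the following is a minimal sketch (not the article’s actual experimental code) of how a user might vary the modal hint in a sentence-opening prompt and compare the completions a causal language model produces. The Hugging Face repository id, the sample subjects, the hint set, and the decoding parameters are all illustrative assumptions; any GPT-2-style model can be substituted.

```python
# Illustrative sketch: comparing sentence completions elicited by different
# "modal hints" prepended after a chosen subject. All identifiers below are
# assumptions for demonstration, not the article's reported setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gladdenm/ManaGPT-1020"  # assumed repo id; swap in "gpt2" to test the pipeline

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

subjects = ["Artificial intelligence", "Social robotics"]        # hypothetical subjects
modal_hints = ["", "may", "will probably", "must"]               # hypothetical hint set

for subject in subjects:
    for hint in modal_hints:
        # Build a sentence opening such as "Social robotics will probably "
        prompt = f"{subject} {hint}".strip() + " "
        inputs = tokenizer(prompt, return_tensors="pt")
        output_ids = model.generate(
            **inputs,
            max_new_tokens=60,
            do_sample=True,
            top_p=0.9,
            temperature=0.8,
            pad_token_id=tokenizer.eos_token_id,
        )
        completion = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        # Crude proxy for response length: tokens generated beyond the prompt.
        gen_len = output_ids.shape[1] - inputs["input_ids"].shape[1]
        print(f"[{hint or 'no hint'}] ({gen_len} new tokens) {completion}")
```

In a fuller analysis, one would generate many completions per subject–hint pair and aggregate length and quality measures across them, rather than inspecting single samples as in this sketch.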