GPT-3: Its Nature, Scope, Limits, and Consequences

@article{Floridi2020GPT3IN,
  title={GPT-3: Its Nature, Scope, Limits, and Consequences},
  author={L. Floridi and Massimo Chiriatti},
  journal={Minds and Machines},
  year={2020},
  volume={30},
  pages={681-694}
}
In this commentary, we discuss the nature of reversible and irreversible questions, that is, questions that may enable one to identify the nature of the source of their answers. We then introduce GPT-3, a third-generation, autoregressive language model that uses deep learning to produce human-like texts, and use the previous distinction to analyse it. We expand the analysis to present three tests based on mathematical, semantic (that is, the Turing Test), and ethical questions and show that GPT… 

Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models

This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with, firstly, an evaluation of how well GPT does in the Turing Test and, secondly, the limits of such…

Intensional Artificial Intelligence: From Symbol Emergence to Explainable and Empathetic AI

It is argued that an explainable artificial intelligence must possess a rationale for its decisions, be able to infer the purpose of observed behaviour, and be able to explain its decisions in the context of what its audience understands and intends; a theory of meaning is proposed in which an agent should model the world a language describes rather than the language itself.

GPT-3 and InstructGPT: technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry

Although OpenAI’s newest 2022 language model, InstructGPT, represents a small step in reducing toxic language and aligning GPT-3 with user intent, it does not provide any compelling solutions to manipulation or bias. It is argued that solutions to these issues must focus on organisational settings as a precondition for ethical decision-making in AI, and on high-quality curated datasets as a precursor to less harmful language model outputs.

The great Transformer: Examining the role of large language models in the political economy of AI

The article explores the role LLMs play in the political economy of AI as infrastructural components for AI research and development, pointing out how they are intertwined with the business models of big tech companies and further shift power relations in their favour.

Compression, The Fermi Paradox and Artificial Super-Intelligence

The following briefly discusses possible difficulties in communication with and control of an AGI (artificial general intelligence), building upon an explanation of the Fermi Paradox and preceding…

Ethics, Rules of Engagement, and AI: Neural Narrative Mapping Using Large Transformer Language Models

A mechanism is proposed to harness the narrative output of large language models and produce diagrams or “maps” of the relationships latent in the weights of models such as GPT-3. These maps provide insight into the organization of information, opinion, and belief in the model, which in turn provides a means to understand intent and response in the context of physical distance.

Self-recognition in conversational agents

The methodology constructs a textual version of the mirror test by placing the agent as the one and only judge that must determine, in an unsupervised manner, whether the contacted party is an other, a mimicker, or itself; the test is objective, self-contained, and devoid of humanness.

Do Artificial Intelligence Systems Understand?

The conclusion states that it is not necessary to attribute understanding to a machine in order to explain its exhibited “intelligent” behavior; a merely syntactic and mechanistic approach to intelligence, as a task-solving tool, is all that is needed to account for the range of operations it can display at the current state of technological development.

Human-Machine Duality: What’s Next In Cognitive Aspects Of Artificial Intelligence?

The goal of the paper is to find means for unifying the human-machine duality in the collective behavior of people and machines, by reconciling approaches that proceed in opposite directions. The…

Essential Features in a Theory of Context for Enabling Artificial General Intelligence

The myriad pragmatic ways in which context has been used, or implicitly assumed, as a core concept in multiple AI sub-areas, such as representation learning and commonsense reasoning, are synthesized.
...

References

Showing 1-10 of 49 references

The Radicalization Risks of GPT-3 and Advanced Neural Language Models

GPT-3 demonstrates significant improvement over its predecessor, GPT-2, in generating extremist texts, and shows strength in generating text that accurately emulates interactive, informational, and influential content that could be utilized for radicalizing individuals into violent far-right extremist ideologies and behaviors.

What the Near Future of Artificial Intelligence Could Be

  • L. Floridi, Philosophy, The 2019 Yearbook of the Digital Ethics Lab, 2020
In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered…

Can GPT-3 Pass a Writer’s Turing Test?

GPT-3 can internalize the rules of language without explicit programming or rules, and can sometimes fail at the simplest of linguistic tasks, but it can also excel at more difficult ones like imitating an author or waxing philosophical.

Common Sense, the Turing Test, and the Quest for Real AI

Hector Levesque considers the role of language in learning, and identifies a possible mechanism behind common sense and the capacity to call on background knowledge: the ability to represent objects of thought symbolically.

The Winograd Schema Challenge

This paper presents an alternative to the Turing Test that has some conceptual and practical advantages: English-speaking adults will have no difficulty with it, and the subject is not required to engage in a conversation and fool an interrogator into believing she is dealing with a person.

Artificial Intelligence, Deepfakes and a Future of Ectypes

The art world is full of reproductions. Some are plain replicas, for example the Mona Lisa. Others are fakes or forgeries, like the “Vermeers” painted by Han van Meegeren that sold for $60 million…

Digital’s Cleaving Power and Its Consequences

The digital is deeply transforming reality. This much is obvious and uncontroversial. The real questions are why, how, and so what. In each case, the answer is far from trivial and definitely open to…

The Fourth Revolution: How the infosphere is reshaping human reality

Contents: Preface; Acknowledgements; List of figures; 1. Hyperhistory; 2. Space: Infosphere; 3. Identity: Onlife; 4. Self-Understanding: The Four Revolutions; 5. Privacy: Informational Friction; 6. Intelligence:…

Turing’s Imitation Game: Still an Impossible Challenge for All Machines and Some Judges––An Evaluation of the 2008 Loebner Contest

An evaluation of the 2008 Loebner contest finds that the number of entries in the final round was higher than in the previous two contests, but that the quality of the entries was lower.

The Onlife Manifesto: Being Human in a Hyperconnected Era

Let us call “digital transition” the societal process arising from the deployment and uptake of ICTs. Indeed, with the current multiplication of devices, sensors, robots, and applications, and these…