Corpus ID: 253265624

Taking the Intentional Stance Seriously, or "Intending" to Improve Cognitive Systems

@inproceedings{Bridewell2022TakingTI,
  title={Taking the Intentional Stance Seriously, or "Intending" to Improve Cognitive Systems},
  author={Will Bridewell},
  year={2022}
}
Finding claims that researchers have made considerable progress in artificial intelligence over the last several decades is easy. However, our everyday interactions with cognitive systems (e.g., Siri, Alexa, DALL-E) quickly move from intriguing to frustrating. One cause of those frustrations rests in a mismatch between the expectations we have due to our inherent, folk-psychological theories and the real limitations we experience with existing computer programs. The software does not understand… 
