Corpus ID: 17074544

Does Spell-Checking Software Need a Warning Label?

@article{Galletta2005SpellCheckingSN,
  title={Does Spell-Checking Software Need a Warning Label?},
  author={Dennis F. Galletta and Alexandra Durcikova and Andrea Everard and Brian M. Jones},
  journal={Communications of the ACM},
  year={2005}
}
Decades ago, as personal computers began to pry their way into our organizational lives, word processing software could barely keep up with fast typists. Today's processors are two to four thousand times their 1MHz speed in 1980, and have data paths eight times their former size. On the road to greater speed, vendors seem to have always rushed in with more sophisticated features to use up those increasingly faster computer cycles. Taking up some of that power are formatting features that…


References (showing 1–10 of 15)
Frequency of Formal Errors in Current College Writing
Being fans of classical rhetoric, prosopopoeia, letteraturizzazione, and the like, as well as enthusiasts for intertextuality, plaisir de texte, differance, etc., they offer this account of their travails.
The myth of the awesome thinking machine
The pervasiveness of computer technology in the personal and professional lives of so many Americans during the last decade has created an environment in which there are few persons who have not…
Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation.
Two experiments are reported which examined operators' trust in and use of the automation in a simulated supervisory process control task, and suggest that operators' subjective ratings of trust and the properties of the automation can be used to predict and optimize the dynamic allocation of functions in automated systems.
User evaluations of MIS success: what are we really measuring?
  • Dale Goodhue
  • Computer Science
  • Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences
  • 1992
A theoretical framework is presented showing the critical constructs which lead in a causal fashion from systems and their characteristics to performance impacts at the individual level, which allows one to more clearly define and contrast the various user evaluation constructs.
Credibility and computing technology
Key terms are defined, knowledge on computer credibility is summarized, and frameworks for understanding issues in this domain are suggested.
TV Personalization System
In evaluation with users, the smart interface came out on top, beating TiVo's interface and TV Guide Magazine in terms of usability, fun, and quick access to TV shows of interest.
Is knowing more really better?: effects of system development information in human-expert system interactions
Results indicate that system information aided in calibrating users' confidence in accord with system reliability, but that it had little effect on users' willingness to take expert system advice and may even hurt users' willingness to continue consulting a particular expert system.
Driver Acceptance of Unreliable Traffic Information in Familiar and Unfamiliar Settings
Results showed that 100% accurate information yielded the best driver performance and subjective opinion, but information that was 43% accurate produced powerful decrements in performance and opinion.
The elements of computer credibility
This work defines key terms relating to computer credibility, synthesizes the literature in this domain, and proposes three new conceptual frameworks for better understanding the elements of computer credibility.
Exposing profiles to build trust in a recommender
This paper describes a method for increasing trust in a TV show recommender. We look for people in common between programs users watch and new programs that are highly rated by our TV show…