A Logic-Driven Framework for Consistency of Neural Models

@article{Li2019ALF,
  title={A Logic-Driven Framework for Consistency of Neural Models},
  author={Tao Li and Vivek Gupta and Maitrey Mehta and Vivek Srikumar},
  journal={arXiv preprint arXiv:1909.00126},
  year={2019}
}
While neural models show remarkable accuracy on individual predictions, their internal beliefs can be inconsistent across examples. In this paper, we formalize such inconsistency as a generalization of prediction error. We propose a learning framework for constraining models using logic rules to regularize them away from inconsistency. Our framework can leverage both labeled and unlabeled examples and is directly compatible with off-the-shelf learning schemes without model redesign.
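To make the idea of "regularizing away from inconsistency" concrete, here is a minimal sketch of how a logic rule such as the implication "if the model believes P(a), it should believe P(b)" can be softened into a differentiable penalty. This uses a product t-norm style relaxation, which is one common choice for this kind of framework; the function name, signature, and the specific relaxation are illustrative assumptions, not the paper's exact formulation.

```python
import math

def implication_loss(p_antecedent, p_consequent, eps=1e-12):
    """Soft penalty for violating the implication antecedent -> consequent.

    Under a product t-norm style relaxation (an assumption for this sketch),
    the penalty is zero when the consequent is at least as probable as the
    antecedent, and grows as the model's beliefs become more inconsistent.
    """
    return max(0.0, math.log(p_antecedent + eps) - math.log(p_consequent + eps))

# Consistent beliefs: consequent at least as probable -> no penalty.
print(implication_loss(0.3, 0.9))  # 0.0

# Inconsistent beliefs: model confident in the antecedent but not the
# consequent -> positive penalty that a training loss could include.
print(implication_loss(0.9, 0.1) > 0.0)  # True
```

Because the penalty is differentiable almost everywhere in the model's output probabilities, it can be added to an ordinary training loss on labeled or unlabeled examples, which matches the abstract's claim of compatibility with off-the-shelf learning schemes.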
