A large number of nonlinear optimization problems involve bilinear, quadratic, and/or polynomial functions in their objective function and/or constraints. In this paper, a theoretical approach is proposed for the global optimization of constrained nonconvex NLP problems. The original nonconvex problem is decomposed into primal and relaxed dual subproblems by introducing new transformation variables, if necessary, and by partitioning the resulting variable set. The decomposition is designed to provide valid upper and lower bounds on the global optimum through the solutions of the primal and relaxed dual subproblems, respectively. New theoretical results are presented that enable the rigorous solution of the relaxed dual problem. The approach is used in the development of a Global OPtimization algorithm (GOP). The algorithm is proved to attain finite ε-convergence and ε-global optimality. An example problem is used to illustrate the GOP algorithm both computationally and geometrically. In an accompanying paper (Visweswaran and Floudas, 1990), applications of the theory and the GOP algorithm to various classes of optimization problems are presented, along with computational results of the approach.
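The abstract describes an iteration that alternates between a primal subproblem, whose solution supplies a valid upper bound on the global optimum, and a relaxed dual subproblem, whose solution supplies a valid lower bound, terminating when the two bounds meet within a tolerance ε. The sketch below illustrates only this generic bounding idea, not the GOP decomposition itself: it substitutes a simple Lipschitz-based lower bound for the relaxed dual subproblem and runs on a toy one-dimensional nonconvex function. The function, the Lipschitz constant, and all names are illustrative assumptions, not taken from the paper.

```python
import heapq

def f(x):
    # Toy nonconvex objective with two local minima on [-2, 2]
    # (an illustrative assumption, not an example from the paper).
    return x**4 - 3*x**2 + x

def lower_bound(lo, hi, L=25.0):
    # Stand-in for the relaxed dual subproblem: a Lipschitz-based
    # lower bound f(x) >= f(mid) - L*(hi - lo)/2, valid on [lo, hi]
    # because |f'(x)| <= 21 <= L on [-2, 2].
    mid = 0.5 * (lo + hi)
    return f(mid) - L * (hi - lo) / 2.0, mid

def bounding_loop(a=-2.0, b=2.0, eps=1e-3):
    # Generic upper/lower bounding loop: the incumbent feasible point
    # gives the upper bound, the best region bound gives the lower bound.
    lb0, mid = lower_bound(a, b)
    best_x, best_val = mid, f(mid)          # feasible point -> upper bound
    heap = [(lb0, a, b)]                    # regions ordered by lower bound
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if best_val - lb <= eps:            # bounds within eps: converged
            break
        m = 0.5 * (lo + hi)
        for left, right in ((lo, m), (m, hi)):
            sub_lb, sub_mid = lower_bound(left, right)
            if f(sub_mid) < best_val:       # improved upper bound
                best_x, best_val = sub_mid, f(sub_mid)
            if sub_lb < best_val - eps:     # region may still hold the optimum
                heapq.heappush(heap, (sub_lb, left, right))
    return best_x, best_val
```

The loop terminates with a point whose objective value is within ε of the global minimum, mirroring the ε-convergence property claimed for GOP, though the mechanism by which GOP generates its lower bounds (the relaxed dual subproblems) is entirely different from the naive interval splitting used here.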