We have recently developed a general-purpose nonlinear system solver environment for complex physics computations on unstructured grids. This environment, named CartaBlanca, was described at last year's Java Grande Forum. CartaBlanca employs a finite-volume method. The solution of the nonlinear algebraic systems, which arise from representing the governing partial differential equations on a discrete grid, uses the Jacobian-Free Newton-Krylov method. Finally, CartaBlanca uses Java's built-in thread facility for shared-memory parallelization.

The advent and popularity of clusters of workstations have stimulated the development of a large body of methods that allow the execution of a Java program on a distributed-memory system. We have used one of these methods, JavaParty, to perform parallel calculations on distributed-memory clusters. JavaParty allows classes to be declared as remote and multiple virtual machines to be created simultaneously, each of which can access remote classes and their instances. Moreover, in the JavaParty environment, accessing a remote class is syntactically identical to accessing a regular Java class, as if a single virtual machine were distributed over several computers. Thus, only two changes to our code were required. First, we declared the communication objects as remote. Second, we used JavaParty's remote threads instead of Java's native threads.

We demonstrate CartaBlanca's parallel performance on two prototypical physics problems: heat transfer and multiphase flow. Briefly, the heat transfer problem solves the transient heat equation on a square domain. The initial temperature distribution changes from zero to one in the x direction and has no gradient component in the y direction. The solution relaxes to a steady state with a uniform temperature of one-half.
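To make the thread-based, shared-memory parallelization concrete, the following is a minimal illustrative sketch, not CartaBlanca source: the class and variable names are our own, and it advances the heat-transfer test problem (a step from zero to one in x, insulated ends) with a simple explicit update rather than the Jacobian-Free Newton-Krylov solver CartaBlanca actually uses. Each time step is partitioned into strips, one per `java.lang.Thread`, with `join` serving as the barrier between steps.

```java
// Hedged sketch (not CartaBlanca source): explicit update of the 1-D transient
// heat equation with insulated ends, parallelized over strips with plain
// java.lang.Thread, the shared-memory mechanism described in the text.
public class HeatThreads {
    static final int N = 64;        // number of cells
    static final double DT = 0.2;   // dt * alpha / dx^2, stable for values <= 0.5

    public static double[] solve(int steps, int nThreads) throws InterruptedException {
        double[] t = new double[N];
        for (int i = N / 2; i < N; i++) t[i] = 1.0;   // step from zero to one in x
        double[] tNew = new double[N];
        for (int s = 0; s < steps; s++) {
            final double[] cur = t, next = tNew;
            Thread[] workers = new Thread[nThreads];
            int chunk = (N + nThreads - 1) / nThreads;
            for (int w = 0; w < nThreads; w++) {
                final int lo = w * chunk, hi = Math.min(N, lo + chunk);
                workers[w] = new Thread(() -> {
                    for (int i = lo; i < hi; i++) {
                        // insulated (zero-flux) boundaries via mirrored neighbors
                        double left  = (i == 0)     ? cur[i] : cur[i - 1];
                        double right = (i == N - 1) ? cur[i] : cur[i + 1];
                        next[i] = cur[i] + DT * (left - 2.0 * cur[i] + right);
                    }
                });
                workers[w].start();
            }
            for (Thread w : workers) w.join();   // barrier between time steps
            double[] tmp = t; t = tNew; tNew = tmp;
        }
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        double[] t = solve(20000, 4);
        // both ends relax toward the uniform steady-state value of one-half
        System.out.printf("min=%.3f max=%.3f%n", t[0], t[N - 1]);
    }
}
```

Under JavaParty, the same decomposition would instead use classes declared as remote and JavaParty's remote threads, with no further syntactic changes.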
The multiphase flow problem simulates a broken-dam flow wherein a fluid such as water is initially confined to the right half of a square domain, with another fluid such as air occupying the left half. At time zero, gravity is "turned on" and the water slumps and flows to fill the bottom half of the domain. We performed both shared-memory and distributed-memory parallel scaling tests. For the shared-memory tests, we used an 8-processor Intel SMP machine with 900 MHz chips. For the distributed-memory tests, we used a distributed-memory Linux cluster with 1 GHz Intel chips, two chips per cluster node, and 1 Gbit/s Ethernet interconnects. We achieve similar scalability on both shared-memory computers and distributed-memory clusters.
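Scaling results of this kind are conventionally summarized as speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p, where T(p) is the wall-clock time on p processors. A minimal sketch of these metrics follows; the timing values in it are hypothetical placeholders, not measurements from our tests.

```java
// Hedged sketch: the standard speedup and efficiency metrics used to report
// parallel scaling. The timings below are hypothetical, for illustration only.
public class Scaling {
    // S(p) = T(1) / T(p)
    public static double speedup(double t1, double tp) {
        return t1 / tp;
    }

    // E(p) = S(p) / p
    public static double efficiency(double t1, double tp, int p) {
        return speedup(t1, tp) / p;
    }

    public static void main(String[] args) {
        double t1 = 100.0;  // hypothetical 1-processor wall-clock time, seconds
        double t8 = 14.0;   // hypothetical 8-processor wall-clock time, seconds
        System.out.printf("S(8)=%.2f  E(8)=%.2f%n",
                speedup(t1, t8), efficiency(t1, t8, 8));
    }
}
```

Comparing E(p) across the SMP and cluster runs at matched problem sizes is what supports a claim of similar scalability on the two architectures.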