The implicit function theorem for a single output variable can be stated as follows:

**Single equation implicit function theorem.** *Let $F(\mathbf{x}, y) = F(x_1, \ldots, x_n, y)$ be a function of class $C^1$ on some neighborhood of a point $(\mathbf{a}, b) \in \mathbb{R}^{n+1}$. Suppose that $F(\mathbf{a}, b) = 0$ and $\partial_y F(\mathbf{a}, b) \neq 0$. Then there exist positive numbers $r_0, r_1$ such that the following conclusions are valid.*

*a. For each $\mathbf{x}$ in the ball $|\mathbf{x} - \mathbf{a}| < r_0$ there is a unique $y$ such that $|y - b| < r_1$ and $F(\mathbf{x}, y) = 0$. We denote this $y$ by $f(\mathbf{x})$; in particular, $f(\mathbf{a}) = b$.*

*b. The function $f$ thus defined for $|\mathbf{x} - \mathbf{a}| < r_0$ is of class $C^1$, and its partial derivatives are given by*

$$\partial_j f(\mathbf{x}) = -\frac{\partial_{x_j} F(\mathbf{x}, f(\mathbf{x}))}{\partial_y F(\mathbf{x}, f(\mathbf{x}))}.$$
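As a quick sanity check (an illustration, not part of the theorem itself), one can take the unit circle:

```latex
% Illustration: F(x, y) = x^2 + y^2 - 1 near (a, b) = (0, 1).
% F(0, 1) = 0 and \partial_y F(0, 1) = 2 \neq 0, so the theorem gives
% y = f(x) near x = 0; here explicitly f(x) = \sqrt{1 - x^2}, and
\partial_1 f(x)
  = -\frac{\partial_{x} F(x, f(x))}{\partial_y F(x, f(x))}
  = -\frac{2x}{2\sqrt{1 - x^2}}
  = -\frac{x}{\sqrt{1 - x^2}},
% which agrees with differentiating \sqrt{1 - x^2} directly.
```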

*Proof.* For part (a), assume without loss of generality that $\partial_y F(\mathbf{a}, b) > 0$. By continuity of that partial derivative, it is positive in some neighborhood of $(\mathbf{a}, b)$; thus for some $r_1 > 0$ the function $y \mapsto F(\mathbf{x}, y)$ is strictly increasing on $[b - r_1, b + r_1]$ for all $\mathbf{x}$ near $\mathbf{a}$, and in particular $F(\mathbf{a}, b - r_1) < 0 < F(\mathbf{a}, b + r_1)$. By continuity of $F$ there then exists $r_0 > 0$ such that $|\mathbf{x} - \mathbf{a}| < r_0$ implies $F(\mathbf{x}, b - r_1) < 0 < F(\mathbf{x}, b + r_1)$, and hence (by the intermediate value theorem along with strict monotonicity in $y$) that there exists a unique $y$ with $|y - b| < r_1$ such that $F(\mathbf{x}, y) = 0$. This defines a function $y = f(\mathbf{x})$.

To show that $f$ has partial derivatives, we must first show that it is continuous. To do so, given $\mathbf{x}_0$ with $|\mathbf{x}_0 - \mathbf{a}| < r_0$ and $\varepsilon > 0$, we can let $(\mathbf{x}_0, f(\mathbf{x}_0))$ be our $(\mathbf{a}, b)$ and use the same process, with $\varepsilon$ in place of $r_1$, to arrive at our $\delta$ in place of $r_0$, which corresponds to $|\mathbf{x} - \mathbf{x}_0| < \delta$ implying $|f(\mathbf{x}) - f(\mathbf{x}_0)| < \varepsilon$.

For part (b), to show that the partial derivatives of $f$ exist and are equal to what we desire, we perturb $\mathbf{x}$ by an increment $h$ that we let WLOG lie along the $j$-th coordinate:

$$\mathbf{x} \mapsto \mathbf{x} + h\mathbf{e}_j = (x_1, \ldots, x_j + h, \ldots, x_n).$$

Then with $k = f(\mathbf{x} + h\mathbf{e}_j) - f(\mathbf{x})$, we have $F(\mathbf{x} + h\mathbf{e}_j, f(\mathbf{x}) + k) = 0 = F(\mathbf{x}, f(\mathbf{x}))$. From the mean value theorem applied along the segment joining these two points, we can arrive at

$$0 = h\,\partial_{x_j} F(\mathbf{x} + th\mathbf{e}_j, f(\mathbf{x}) + tk) + k\,\partial_y F(\mathbf{x} + th\mathbf{e}_j, f(\mathbf{x}) + tk)$$

for some $t \in (0, 1)$. Rearranging and taking $h \to 0$ (so that also $k \to 0$, by continuity of $f$) gives us

$$\partial_j f(\mathbf{x}) = \lim_{h \to 0} \frac{k}{h} = -\frac{\partial_{x_j} F(\mathbf{x}, f(\mathbf{x}))}{\partial_y F(\mathbf{x}, f(\mathbf{x}))}.$$

▢

The foregoing can be generalized to several dependent variables, with $k$ implicit functions determined by $k$ constraints.
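Before the generalization, here is a small numerical sketch of the single-equation theorem (the function $F$ and the base point are my own illustrative choices, not from the text). It recovers $f$ by bisection, mirroring the intermediate-value-theorem construction in the proof, and compares a finite-difference derivative against the formula $\partial_1 f = -\partial_x F / \partial_y F$:

```python
# Numerical illustration of the single-equation implicit function theorem.
# F(x, y) = x^2 + y^3 + x*y - 3 vanishes at (a, b) = (1, 1), and
# dF/dy = 3y^2 + x = 4 != 0 there, so y = f(x) is defined near x = 1.
# We recover f by the same device as the proof: F(x, .) is strictly
# increasing near y = 1, so bisection (the intermediate value theorem)
# finds the unique root.

def F(x, y):
    return x**2 + y**3 + x*y - 3

def f(x, lo=0.0, hi=2.0, tol=1e-12):
    """Unique y in (lo, hi) with F(x, y) = 0, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(x, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Finite-difference derivative of f at x = 1 ...
h = 1e-6
fd = (f(1 + h) - f(1 - h)) / (2 * h)

# ... versus the theorem's formula  f'(x) = -F_x / F_y  at (1, 1):
Fx = 2*1 + 1      # dF/dx = 2x + y
Fy = 3*1**2 + 1   # dF/dy = 3y^2 + x
print(fd, -Fx / Fy)   # both approximately -0.75
```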

**Implicit function theorem for systems of equations.** *Let $\mathbf{F}(\mathbf{x}, \mathbf{y}) = \mathbf{F}(x_1, \ldots, x_n, y_1, \ldots, y_k)$ be an $\mathbb{R}^k$-valued function of class $C^1$ on some neighborhood of a point $(\mathbf{a}, \mathbf{b}) \in \mathbb{R}^{n+k}$, and let $B = \det\bigl(\partial F_i / \partial y_j\bigr)\big|_{(\mathbf{a}, \mathbf{b})}$. Suppose that $\mathbf{F}(\mathbf{a}, \mathbf{b}) = \mathbf{0}$ and $B \neq 0$. Then there exist positive numbers $r_0, r_1$ such that the following conclusions are valid.*

*a. For each $\mathbf{x}$ in the ball $|\mathbf{x} - \mathbf{a}| < r_0$ there is a unique $\mathbf{y}$ such that $|\mathbf{y} - \mathbf{b}| < r_1$ and $\mathbf{F}(\mathbf{x}, \mathbf{y}) = \mathbf{0}$. We denote this $\mathbf{y}$ by $\mathbf{f}(\mathbf{x})$; in particular, $\mathbf{f}(\mathbf{a}) = \mathbf{b}$.*

*b. The function $\mathbf{f}$ thus defined for $|\mathbf{x} - \mathbf{a}| < r_0$ is of class $C^1$, and its partial derivatives $\partial_j f_i$ can be computed by differentiating the equations $F_i(\mathbf{x}, \mathbf{f}(\mathbf{x})) = 0$ with respect to $x_j$ and solving the resulting linear system of equations for $\partial_j f_1, \ldots, \partial_j f_k$.*
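Spelled out (a sketch using the notation above), the linear system in part (b) reads:

```latex
% Differentiate F_i(x, f(x)) = 0 with respect to x_j by the chain rule:
\partial_{x_j} F_i + \sum_{l=1}^{k} \partial_{y_l} F_i \,\partial_j f_l = 0,
\qquad i = 1, \ldots, k,
% i.e. the k-by-k linear system
\left(\frac{\partial F_i}{\partial y_l}\right)
  \begin{pmatrix} \partial_j f_1 \\ \vdots \\ \partial_j f_k \end{pmatrix}
  = -\begin{pmatrix} \partial_{x_j} F_1 \\ \vdots \\ \partial_{x_j} F_k \end{pmatrix},
% which is solvable near (a, b) since the coefficient determinant is
% continuous and equals B != 0 at (a, b).
```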

*Proof:* For this we will be using Cramer's rule: one can solve a linear system $A\mathbf{u} = \mathbf{v}$ (provided, of course, that $A$ is non-singular) by taking the matrix $A_i$ obtained by substituting the $i$-th column of $A$ with $\mathbf{v}$ and letting $u_i$ be the determinant of $A_i$ divided by the determinant of $A$.
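As a concrete sketch of the rule (hand-rolled determinants, illustrative only, not an efficient solver):

```python
# Cramer's rule for A u = v: u_i = det(A_i) / det(A), where A_i is A
# with its i-th column replaced by v. Illustrative only; the cofactor
# determinant below costs O(n!).

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, v):
    d = det(A)  # assumed nonzero, i.e. A non-singular
    n = len(A)
    return [det([A[r][:i] + [v[r]] + A[r][i + 1:] for r in range(n)]) / d
            for i in range(n)]

# Example: [[1, 2], [3, 4]] u = [5, 11] has solution u = (1, 2).
print(cramer([[1, 2], [3, 4]], [5, 11]))  # [1.0, 2.0]
```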

From this, we are somewhat hinted that induction on $k$ is in order. If the $k \times k$ matrix $M = \bigl(\partial F_i / \partial y_j\bigr)$ is invertible at $(\mathbf{a}, \mathbf{b})$, then one of the $(k-1) \times (k-1)$ submatrices obtained by deleting its last column and one row is invertible (expand $\det M$ along the last column). Assume WLOG, relabeling the $F_i$ if necessary, that this applies to the one determined by $F_1, \ldots, F_{k-1}$ and $y_1, \ldots, y_{k-1}$. With this in mind, we can via our inductive hypothesis have

$$F_i(\mathbf{x}, y_1, \ldots, y_{k-1}, y_k) = 0, \qquad i = 1, \ldots, k-1,$$

determine $y_i = g_i(\mathbf{x}, y_k)$ for $i = 1, \ldots, k-1$. Here we are making $y_k$ an independent variable, and we can do that because we are inducting on the number of dependent variables (and also constraints). Substituting these into the last constraint reduces the problem to the single-equation case, with

$$G(\mathbf{x}, y_k) = F_k\bigl(\mathbf{x}, g_1(\mathbf{x}, y_k), \ldots, g_{k-1}(\mathbf{x}, y_k), y_k\bigr) = 0.$$

It suffices now to show that $\partial_{y_k} G \neq 0$ at the point in question, so that our single-equation theorem applies. Routine application of the chain rule gives

$$\partial_{y_k} G = \partial_{y_k} F_k + \sum_{j=1}^{k-1} \partial_{y_j} F_k \,\partial_{y_k} g_j.$$

The $\partial_{y_k} g_j$'s are the solution to the following linear system, obtained by differentiating $F_i(\mathbf{x}, g_1, \ldots, g_{k-1}, y_k) = 0$ with respect to $y_k$:

$$\sum_{j=1}^{k-1} \partial_{y_j} F_i \,\partial_{y_k} g_j = -\partial_{y_k} F_i, \qquad i = 1, \ldots, k-1.$$

Let $A$ denote the submatrix of $M$ induced by $F_1, \ldots, F_{k-1}$ and $y_1, \ldots, y_{k-1}$. We see then that the replacement matrix $A_j$ in Cramer's rule is, up to sign, the minor $M_{kj}$ of $M$ obtained by deleting the $k$-th row and $j$-th column: it is $M_{kj}$ but with the last column swapped to the left $k - 1 - j$ times, so that it lands in the $j$-th column, and also with a negative sign. This means

$$\partial_{y_k} g_j = \frac{\det A_j}{\det A} = \frac{(-1)^{k-j} \det M_{kj}}{\det A}.$$

Now, we substitute this into the chain-rule expression for $\partial_{y_k} G$ to get

$$\partial_{y_k} G = \partial_{y_k} F_k + \sum_{j=1}^{k-1} (-1)^{k-j}\, \partial_{y_j} F_k \,\frac{\det M_{kj}}{\det A} = \frac{1}{\det A} \sum_{j=1}^{k} (-1)^{k+j}\, \partial_{y_j} F_k \,\det M_{kj} = \frac{\det M}{\det A},$$

by cofactor expansion of $\det M$ along its last row (note that $M_{kk} = A$). Since $\det M = B \neq 0$ at $(\mathbf{a}, \mathbf{b})$, this is nonzero. Finally, we apply the implicit function theorem for one variable to the single constraint $G(\mathbf{x}, y_k) = 0$ that remains. ▢
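A numeric sketch of part (b) for a concrete $2 \times 2$ system (the system and the base point are my own illustrative choices, not from the text): the implicit derivatives obtained from the linear system are compared against finite differences of a Newton-computed implicit function.

```python
# Illustrative 2x2 system:
#   F1(x, u, v) = u^2 + v^2 + x^2 - 3,   F2(x, u, v) = u - v,
# which vanishes at (x, u, v) = (1, 1, 1), with Jacobian in (u, v)
#   [[2u, 2v], [1, -1]],  determinant -4 != 0 there.
# Differentiating F_i(x, u(x), v(x)) = 0 in x gives the linear system
#   [[2u, 2v], [1, -1]] [u', v'] = -[F1_x, F2_x] = [-2x, 0].

def solve2(a, b, c, d, p, q):
    """Solve [[a, b], [c, d]] [s, t] = [p, q] by Cramer's rule."""
    det = a * d - b * c
    return (p * d - b * q) / det, (a * q - p * c) / det

def uv(x):
    """Solve F = 0 for (u, v) near (1, 1) by Newton's method."""
    u = v = 1.0
    for _ in range(50):
        f1 = u * u + v * v + x * x - 3
        f2 = u - v
        du, dv = solve2(2 * u, 2 * v, 1.0, -1.0, -f1, -f2)
        u, v = u + du, v + dv
    return u, v

# Implicit derivatives at x = 1 from the linear system of part (b) ...
up, vp = solve2(2.0, 2.0, 1.0, -1.0, -2.0, 0.0)

# ... versus centred finite differences of the numerically computed f:
h = 1e-6
u_plus, _ = uv(1 + h)
u_minus, _ = uv(1 - h)
print(up, (u_plus - u_minus) / (2 * h))  # both approximately -0.5
```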

**References**

- Gerald B. Folland, *Advanced Calculus*, Prentice Hall, Upper Saddle River, NJ, 2002, pp. 114–116, 420–422.