Here are some comments on the parts of the midterm you had some difficulties with. If there are remaining questions, please come and see me. (I don't like to give out full solutions for questions I might want to use in a possibly modified way again.)

1. Most of you got the primal and dual correct, but you should have inequality constraints in nonnegative variables.

c) The advantage of the dual is that all the objective function coefficients are positive, so it seems reasonable to make the variables as small as possible. Usually, this doesn't work, because there can be tradeoffs between one variable and another. Here, note: making a z_t small actually *helps* in making all earlier y_s's small. It does seem to make it harder to make the z_s's for s < t small, but we can handle this by working backward from t = T, where the sums below are empty: the constraint on z_T is z_T >= -c_T, so set z_T = max{0, -c_T}; then the constraint on y_T is y_T >= p_T + z_T, so set y_T to max{0, p_T + z_T}. For t = T-1, T-2, ..., 1, set similarly z_t = max{0, -c_t - sum_{t+1 <= i <= T} (z_i - y_i)} and then y_t = max{0, p_t + z_t + sum_{t+1 <= i <= T} (z_i - y_i)}. This gives a (hopefully) optimal solution to the dual. Note that it is feasible: the choice of each variable guarantees that it is nonnegative and satisfies the appropriate constraint.

d) Now we choose a primal solution to satisfy complementary slackness. Note that only w_1 appears in the constraint w_1 <= I, which corresponds to the dual variable y_1. So if y_1 > 0, set w_1 = I. What if y_1 = 0? Then, because of the way y_1 was chosen, it will likely be the case that y_1 > p_1 + z_1 + sum_{2 <= i <= T} (z_i - y_i), in which case we have strict inequality in the dual constraint corresponding to the variable w_1, so we should set w_1 = 0. What if there is a tie (this means we have dual degeneracy)? Then we can set w_1 to any feasible value, but we'll set it to 0, since this would be right if p_1 were infinitesimally smaller. Next we consider z_1. If it's positive, then by complementary slackness we want x_1 = C - I + w_1. If not, there's likely slack in the dual constraint corresponding to x_1, so we set x_1 = 0. For t = 1, 2, ..., T, set w_t = I + sum_{1 <= i <= t-1} (x_i - w_i) if y_t is positive, and w_t = 0 otherwise; set x_t = C - I + w_t - sum_{1 <= i <= t-1} (x_i - w_i) if z_t is positive, and x_t = 0 otherwise. This gives a (hopefully) optimal solution to the primal.

e) We have a (suspected optimal) feasible solution to the dual, and a primal solution that satisfies complementary slackness with it. They will both be optimal as long as the primal solution is feasible, and this needs to be checked. We need to be sure that the quantities I + sum_{1 <= i <= t-1} (x_i - w_i) and C - I + w_t - sum_{1 <= i <= t-1} (x_i - w_i) are nonnegative: if w_t (x_t) is set to such a quantity, that constraint is satisfied with equality and the variable is nonnegative; otherwise the variable is zero and the constraint is satisfied. We can show that these quantities are nonnegative by induction, starting with t = 1, where this comes down to the right-hand sides I and C - I being nonnegative; so primal feasibility follows from this, as did our rules for choosing the y's and z's to get good (optimal) values in the dual.
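In case it helps to see the indexing, here is a rough Python sketch of the backward/forward recipe in (c)-(e). The arrays c and p, the constants I and C, and the function name are just stand-ins for the data in the exam problem (which I'm not reproducing here), so treat this as an illustration of the recipe above, not a checked solution.

    def dual_then_primal(c, p, I, C):
        T = len(c)
        y, z = [0.0] * T, [0.0] * T
        # Dual: work backward, t = T, T-1, ..., 1 (indices are 0-based here).
        for t in range(T - 1, -1, -1):
            S = sum(z[i] - y[i] for i in range(t + 1, T))   # empty sum when t = T
            z[t] = max(0.0, -c[t] - S)
            y[t] = max(0.0, p[t] + z[t] + S)
        # Primal: work forward, choosing w_t and x_t by complementary slackness.
        w, x = [0.0] * T, [0.0] * T
        for t in range(T):
            D = sum(x[i] - w[i] for i in range(t))          # empty sum when t = 1
            w[t] = I + D if y[t] > 0 else 0.0
            x[t] = C - I + w[t] - D if z[t] > 0 else 0.0
            # Part (e): both right-hand sides must turn out nonnegative.
            assert I + D >= 0 and C - I + w[t] - D >= 0
        return y, z, w, x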
3(b). If Q is bounded and nonempty, then the LP max {e^T x : Ax = b, x >= 0} has an optimal solution, and then so does its dual by strong duality. So we get y with A^T y >= e (if you wrote the LP as min -e^T x, you get z with A^T z <= -e; set y = -z to get y as above). This means that every component of A^T y is positive, since each is at least one; and for every feasible x, (A^T y)^T x = y^T Ax = b^T y, so this is an equality constraint with all coefficients positive. You can just add this row to the bottom of Ax = b and get the same feasible region. Or, if A had full row rank (or to keep the same number of equality constraints), you can add this row and then remove the i-th row, for any i with y_i nonzero; the remaining rows together with the new row then imply that the i-th equation is still satisfied.

4(a). Note that, if the columns of the original coefficient matrix are called a_j, j = 1, 2, ..., 6, we have a_1 = (2;1;1), a_2 = (1;1;1), and a_4 = (1;0;0) = a_1 - a_2 (only subtraction is needed to check this). The coefficients of x_4 below the \zeta-row are the entries of B^{-1} a_4. But this is just B^{-1} a_1 - B^{-1} a_2, and those columns are there for you to grab. So the second and third ??s are 1 - 0.4 = 0.6 and 0 - 0.2 = -0.2, again using only subtraction. The top ?? is -bar c_4 = a_4^T y - c_4. You have -bar c_1 and -bar c_2, and from these you can get a_1^T y and a_2^T y. Then one subtraction gets you a_4^T y and another gives you the top ??. Done!
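And here is the same arithmetic for 4(a) in a few lines of Python. Only the columns a_1, a_2 and the tableau entries 1, 0, 0.4, 0.2 used above come from the problem; every value set to PLACEHOLDER stands in for an entry of the exam tableau that isn't reproduced here, so the point is just the pattern of subtractions.

    # Columns of the original coefficient matrix (given in the problem).
    a1 = [2.0, 1.0, 1.0]
    a2 = [1.0, 1.0, 1.0]
    a4 = [a1[k] - a2[k] for k in range(3)]     # (1, 0, 0): only subtraction needed

    PLACEHOLDER = 0.0   # stands in for a tableau value not reproduced here

    # Columns of x_1 and x_2 below the zeta-row, read off the tableau.
    Binv_a1 = [PLACEHOLDER, 1.0, 0.0]
    Binv_a2 = [PLACEHOLDER, 0.4, 0.2]
    neg_cbar1, neg_cbar2 = PLACEHOLDER, PLACEHOLDER     # top-row entries -bar c_1, -bar c_2
    c1, c2, c4 = PLACEHOLDER, PLACEHOLDER, PLACEHOLDER  # original objective coefficients

    # Column of x_4 below the zeta-row: B^{-1} a_4 = B^{-1} a_1 - B^{-1} a_2.
    Binv_a4 = [Binv_a1[k] - Binv_a2[k] for k in range(3)]   # second, third entries: 0.6, -0.2

    # Top entry: -bar c_4 = a_4^T y - c_4, using a_j^T y = (-bar c_j) + c_j.
    a1Ty = neg_cbar1 + c1
    a2Ty = neg_cbar2 + c2
    neg_cbar4 = (a1Ty - a2Ty) - c4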