Replies: 1 comment
There are some options, depending on what exactly is taking the time. You should profile the code to check that first.

If it's the lpSum, you can manually build your constraints with LpAffineExpression or LpConstraint, provided you understand the exact structure those constructors expect. Basically, you need to give the coefficient for each variable explicitly.

If it's reading the values out of the numpy matrices, you can pre-filter them by making two dictionaries, R_dict and T_dict, indexed over j and k, along the lines of the sketch below.
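The code snippet this reply refers to seems to have been lost in the page extraction; as a minimal sketch of the pre-filtering idea, assuming T and R are numpy arrays of shape (n, J, K) (names and sizes here are illustrative, not from the original post), it could look like this:

```python
import numpy as np

# Illustrative sizes; the real n, J, K come from the problem data.
n, J, K = 50, 20, 20
T = np.random.rand(n, J, K)
R = np.random.rand(n, J, K)

# Pre-filter: one plain-Python list of coefficients per (j, k) pair,
# so the constraint loop never does element-by-element numpy indexing.
T_dict = {(j, k): T[:, j, k].tolist() for j in range(J) for k in range(K)}
R_dict = {(j, k): R[:, j, k].tolist() for j in range(J) for k in range(K)}
```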
Your constraint would then be something like:

for j, k in some_list:
    X[k] >= pulp.lpSum(T_dict[j, k][i] * (R_dict[j, k][i] + X[i]) for i in range(n))

Hope that helps.
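To make the reply's two suggestions concrete, here is a hedged sketch that continues from the dictionaries in the previous snippet (prob, X, and the loop bounds n, J, K are hypothetical names, not from the original posts). The LpAffineExpression route builds each constraint from explicit (variable, coefficient) pairs, which avoids summing many small overloaded expressions:

```python
import pulp

# Hypothetical problem and variables; carried over: n, J, K, T_dict, R_dict.
prob = pulp.LpProblem("example", pulp.LpMinimize)
X = {i: pulp.LpVariable(f"x_{i}", lowBound=0) for i in range(n)}

for j in range(J):
    for k in range(K):
        t = T_dict[j, k]
        r = R_dict[j, k]
        # Constant part of the right-hand side: sum_i T[i,j,k] * R[i,j,k]
        const = sum(ti * ri for ti, ri in zip(t, r))
        # Affine part: sum_i T[i,j,k] * X[i], built from explicit pairs.
        rhs = pulp.LpAffineExpression(
            [(X[i], t[i]) for i in range(n)], constant=const
        )
        prob += X[k] >= rhs
```

As the reply says, profiling first is worth it: if the bottleneck is building the expressions, LpAffineExpression usually helps; if it is the numpy indexing, the pre-built dictionaries alone may already be enough.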
Hey,
I have a query. We're trying to use PuLP to solve an LP problem, and I have a few numpy matrices containing some relevant constant coefficients (say T and R) and a vector (a dict, basically) of LpVariables, say X.
My constraints are of the form X[k] >= pulp.lpSum([T[i, j, k] * (R[i, j, k] + X[i]) for i in range(n)]) for all j and k.
My question is: can this be made more efficient? Most of the time goes into encoding these constraints rather than solving the main problem!
Can I use a numpy matrix directly with a vector of LpVariables? If yes, how? If not, how do I make this more efficient?
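For concreteness, a minimal, self-contained version of the setup described here might look like the following (dimensions and variable bounds are made up for illustration); the nested constraint loop is the part reported as slow:

```python
import numpy as np
import pulp

n, J, K = 50, 20, 20          # illustrative sizes
T = np.random.rand(n, J, K)   # constant coefficient tensors
R = np.random.rand(n, J, K)

prob = pulp.LpProblem("example", pulp.LpMinimize)
X = {i: pulp.LpVariable(f"x_{i}", lowBound=0) for i in range(n)}

# Encoding these constraints dominates the runtime, not the solve itself.
for j in range(J):
    for k in range(K):
        prob += X[k] >= pulp.lpSum(
            T[i, j, k] * (R[i, j, k] + X[i]) for i in range(n)
        )
```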