I set the problem up below for solving using scipy.optimize.linprog. I get the required result for the overlapping case, but only an approximate solution for the non-overlapping case.
- Test for possible scenarios, e.g. a machine becomes unavailable. Its flow has to be redistributed across the remaining machines while still meeting all of the above goals.
In the walk-through below I give all machines the same capacity. You could easily zero one out to mimic a scenario where a machine was unavailable. Similarly, you could set it to a fraction to mimic partial capacity, or make it a function of other factors.
For a more statistical exploration of scenarios, you could make machine capacity probabilistic and report the statistics from multiple trials.
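As a minimal sketch of that idea (reusing the data from the walk-through below): draw each machine's capacity from a distribution, re-solve, and count how often a feasible allocation exists. The 50%-100% capacity range and the number of trials are arbitrary choices here, and for brevity this sketch only checks that each machine's capacity is respected, not the uniformity band.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Data from the walk-through below
compat = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 1, 1, 0],
                   [1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
product_totals = np.array([6720, 2880, 3840, 5760]) // 3
cycle_times = np.array([36, 36, 36, 36, 18, 18])
num_machines, num_products = compat.shape
num_weights = num_machines * num_products

# Seconds machine m would spend if it took the whole of product p
seconds = compat * cycle_times[:, None] * product_totals[None, :]

# Equality constraints: weights for each product sum to 1
A_eq = np.zeros((num_products, num_weights))
for p in range(num_products):
    mask = np.zeros_like(compat)
    mask[:, p] = compat[:, p]
    A_eq[p] = mask.ravel()
b_eq = np.ones(num_products)

bounds = [(0., 1.) if c else (0., 0.) for c in compat.ravel()]

# Capacity constraints: each machine's busy time <= its (random) capacity
A_cap = np.zeros((num_machines, num_weights))
for m in range(num_machines):
    mask = np.zeros_like(seconds)
    mask[m] = seconds[m]
    A_cap[m] = mask.ravel()

def feasible(capacities):
    res = linprog(seconds.ravel(), A_cap, capacities, A_eq, b_eq, bounds)
    return res.success

T_workday = 24 * 60**2
n_trials = 50
successes = sum(
    feasible(rng.uniform(0.5, 1.0, num_machines) * T_workday)
    for _ in range(n_trials)
)
print(f'feasible in {successes}/{n_trials} trials')
```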
## Solution walk-through
### Supplied data
I use the overlapping case from your examples.
The supplied data comprises:
```python
products = ['Pa', 'Pb', 'Pc', 'Pd']
machines = ['M1', 'M2', 'M3', 'M4', 'M5', 'M6']

num_products = len(products)
num_machines = len(machines)
```
- The quantity of each product - one third of the full load in this case.
```python
#                          Pa    Pb    Pc    Pd
product_totals = np.array([6720, 2880, 3840, 5760]) // 3
```
- A matrix of machine-product compatibilities
```python
machine_product_compatibility = np.array([
   #Pa Pb Pc Pd
    [1, 0, 1, 0], # M1: Pa, Pc
    [0, 1, 0, 1], # M2: Pb, Pd
    [0, 1, 1, 0], # M3: Pb, Pc
    [1, 0, 0, 1], # M4: Pa, Pd
    [1, 1, 0, 0], # M5: Pa, Pb
    [0, 0, 1, 1], # M6: Pc, Pd
])
```
- The cycle time of each machine
```python
#                        M1  M2  M3  M4  M5  M6
cycle_times = np.array([36, 36, 36, 36, 18, 18])
```
- The capacity of each machine (how much time/seconds it is available for)
```python
#Seconds available per machine
T_workday = 24 * 60**2
machine_capacities = np.full(num_machines, T_workday)
```
- The utilisation (± tolerance) we want each machine to be at. It is uniform here, and defined with reference to a known 'full load' value.
```python
# For enforcing uniform machine utilisation
full_utilisation_reference = np.array([6720, 2880, 3840, 5760]).sum() #all machines were 100% here
uniform_ideal = product_totals.sum() / full_utilisation_reference

uniformity_tolerance = 0/100 #Accept 0% deviation either side of the ideal
uniformity_lb = max(0., uniform_ideal - uniformity_tolerance)
uniformity_ub = min(1., uniform_ideal + uniformity_tolerance)
```
### Framing the problem
Each product will be allotted to one or more machines. Since each product can, in theory, be spread over six machines, we can say that the task is to estimate six weights per product. So we want to estimate num_machines x num_products = 24 weights in total.
```python
# We want to estimate a weight per product, per machine
num_weights = num_machines * num_products
```
These product weights (percentages) say how much of a product is given to each machine.
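Concretely, the 24 weights live in one flat, row-major vector (this is the layout `linprog` works with): the weight for machine `m` and product `p` sits at flat index `m * num_products + p`, and reshaping recovers the (machines, products) grid. A quick demonstration with stand-in values:

```python
import numpy as np

num_machines, num_products = 6, 4

# Stand-in weight values, just to demonstrate the indexing
weights = np.arange(num_machines * num_products, dtype=float)

m, p = 4, 1  # machine M5, product Pb
grid = weights.reshape(num_machines, num_products)
assert weights[m * num_products + p] == grid[m, p]
print('flat index', m * num_products + p, '->', grid[m, p])
```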
### Defining constraints
The weights need to encode these constraints:
1. The weights are like percentages, so they must sum to 100% for each product
I set this up using an equality constraint (A_eq, b_eq).
```python
# === Equality constraints ===
# For each product p, product_weights must sum to 100%
#   A_eq @ product_weights = b_eq
A_eq = np.zeros((num_products, num_weights))
b_eq = np.ones(num_products)
for p in range(num_products):
    #Compatibility matrix, limited to product p
    compatibilities_p = np.zeros_like(machine_product_compatibility)
    compatibilities_p[:, p] = machine_product_compatibility[:, p]

    #Sum of weights for product p (A_eq @ x) = 1 (b_eq)
    A_eq[p, :] = compatibilities_p.ravel()
```
The constraint works as follows. We use matrix notation to define a constraint per product, and put that into a single matrix. Each row is for a different product.
$$A_{eq} \cdot weights=b_{eq}$$
In this case, $A_{eq}$ is shaped (num_products, num_weights), and $b_{eq}$ is shaped (num_products,) since it must match the number of rows.
Concretely, for a given product $p$, the corresponding row from $A_{eq} \cdot weights$ will select the weights relevant to that product and sum them together: $\Sigma weights_{p}$. The matching element from $b_{eq}$ specifies that the result should equal 100% (hence "equality constraint"): $$\Sigma weights_{p}=100\%$$
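A quick sanity check of that construction (using the compatibility matrix from above): each row of A_eq, viewed as a (machines, products) grid, should be the compatibility matrix with every column except its product zeroed out.

```python
import numpy as np

machine_product_compatibility = np.array([
    [1, 0, 1, 0], [0, 1, 0, 1], [0, 1, 1, 0],
    [1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
num_machines, num_products = machine_product_compatibility.shape
num_weights = num_machines * num_products

A_eq = np.zeros((num_products, num_weights))
for p in range(num_products):
    c = np.zeros_like(machine_product_compatibility)
    c[:, p] = machine_product_compatibility[:, p]
    A_eq[p] = c.ravel()

# Each row, reshaped to (machines, products), selects exactly the
# weights of machines compatible with its product
for p in range(num_products):
    grid = A_eq[p].reshape(num_machines, num_products)
    assert np.array_equal(grid[:, p], machine_product_compatibility[:, p])
    assert grid.sum() == machine_product_compatibility[:, p].sum()
print('A_eq rows select exactly the compatible weights per product')
```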
2. The usage of each machine should be uniform
I define this using an upper-bound constraint.
```python
# === Uniformity constraints ===
# Each machine should be utilised at a similar level:
#   uniform_ideal - tolerance < utilisation % < uniform_ideal + tolerance
A_uniform = np.zeros((num_machines * 2, num_weights))
b_uniform = np.zeros((num_machines * 2,))
for m in range(num_machines):
    #Compatibility matrix, limited to machine m
    compatibilities_m = np.zeros_like(machine_product_compatibility)
    compatibilities_m[m, :] = machine_product_compatibility[m, :]
    seconds_per_x = compatibilities_m * product_totals[None, :] * cycle_times[m]

    #Upper bound
    # utilisation < (ideal + tolerance) * capacity
    A_uniform[2 * m, :] = seconds_per_x.ravel()
    b_uniform[2 * m] = uniformity_ub * machine_capacities[m]

    #Lower bound
    # utilisation > (ideal - tol)*capacity -> -sum < -(ideal - tol)*capacity
    A_uniform[2 * m + 1, :] = -seconds_per_x.ravel()
    b_uniform[2 * m + 1] = -uniformity_lb * machine_capacities[m]

A_ub, b_ub = A_uniform, b_uniform
```
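For intuition on why this constraint matters, consider a hypothetical naive allocation (using the data above) that splits every product equally over its three compatible machines. It is a valid split, but the resulting utilisations are far from uniform, which is exactly what this constraint pair rules out:

```python
import numpy as np

compat = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 1, 1, 0],
                   [1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
product_totals = np.array([6720, 2880, 3840, 5760]) // 3
cycle_times = np.array([36, 36, 36, 36, 18, 18])

seconds = compat * cycle_times[:, None] * product_totals[None, :]

# Hypothetical weights: each product split equally over its compatible machines
weights = compat / compat.sum(axis=0, keepdims=True)

busy = (seconds * weights).sum(axis=1)  # busy seconds per machine
frac = busy / (24 * 60**2)
print(frac.round(2))  # spread out, not uniform -> the LP must rebalance
```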
3. For incompatible product-machine pairs, the weight must be zero
4. Each weight must lie in $[0, 1]$
I set these up using boundary constraints:
```python
# === Bounds constraints ===
# Product weights bounded between [0, 1] for compatible product-machine pairs, else 0
bounds = [
    (0., 1.) if machine_product_compatibility[m, p] else (0., 0.)
    for m in range(num_machines) for p in range(num_products)
]
```
### Solving for product_weights
We want to estimate product_weights in a way that minimises time utilisation across machines. The utilisation of each machine is given by summing unweighted_utilisation x product_weights per machine.
```python
# Coefficients
# Each row (machine) of unweighted_utilisation @ product_weights gives the
# utilisation distribution over products.
# Will find product_weights s.t. utilisation is minimised (i.e. time minimised -> maximal throughput)
unweighted_utilisation = (
    machine_product_compatibility * cycle_times[:, None] * product_totals[None, :]
)

#Estimate product_weights (returned in res.x)
res = linprog(unweighted_utilisation.ravel(), A_ub, b_ub, A_eq, b_eq, bounds)
display(res)
```
### Results for this example
The percentage weights come back as:
```
      #Pa   Pb   Pc  Pd
array([[ 36,   0,   0,  0],  # M1
       [  0,   0,   0, 42],  # M2
       [  0,   0,  62,  0],  # M3
       [ 36,   0,   0,  0],  # M4
       [ 29, 100,   0,  0],  # M5
       [  0,   0,  38, 58]]  # M6
)
```
This suggests that you can reserve machines 1 to 4 for a single product each, and use machines 5 and 6 for two products each.
The total available capacity is machine_capacities.sum() (seconds in a day $\times$ the number of machines), of which a third is utilised, since we have 1/3 of the full load in this example:
```
available time: 518,400 | utilised time: 172,800
```
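As a quick check of those totals:

```python
T_workday = 24 * 60**2         # 86,400 seconds per machine per day
available = 6 * T_workday      # 518,400 seconds across 6 machines
utilised = 172_800
assert utilised / available == 1/3  # exactly a third of the capacity is used
print(f'{available:,} available, {utilised:,} utilised')
```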
Fractional utilisation per machine shows each hitting the uniformity requirement:
```
[0.33 0.33 0.33 0.33 0.33 0.33]
```
## Non-overlapping case
I also tried the second, non-overlapping case (at 1/10th of the total). It succeeds if I allow a uniformity tolerance of 4%, by changing uniformity_tolerance to 4/100. The solution I get results in utilisation fractions of [0.14 0.14 0.12 0.06 0.11 0.06], which is close to yours though not identical. The drawback is that you'd need to keep increasing the tolerance until the solve succeeds, which might not be practical. A squared-error minimisation approach might work better, since it treats the uniformity target as a soft condition rather than a hard failure point.
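As a sketch of that soft-constraint idea (using the overlapping data from above, since the non-overlapping matrix isn't reproduced here): keep the sum-to-100% constraints hard, drop the hard uniformity band, and minimise the squared deviation of each machine's utilisation from the ideal, e.g. via scipy.optimize.minimize with SLSQP. This is one possible formulation, not the only one:

```python
import numpy as np
from scipy.optimize import minimize

compat = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 1, 1, 0],
                   [1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
product_totals = np.array([6720, 2880, 3840, 5760]) // 3
cycle_times = np.array([36, 36, 36, 36, 18, 18])
num_machines, num_products = compat.shape

seconds = compat * cycle_times[:, None] * product_totals[None, :]
capacity = 24 * 60**2
ideal = product_totals.sum() / np.array([6720, 2880, 3840, 5760]).sum()  # 1/3 here

def objective(x):
    """Sum of squared deviations from the ideal utilisation."""
    W = x.reshape(num_machines, num_products)
    util = (seconds * W).sum(axis=1) / capacity
    return ((util - ideal) ** 2).sum()

# Hard constraints: each product's weights still sum to 100%
cons = [{'type': 'eq',
         'fun': lambda x, p=p: x.reshape(num_machines, num_products)[:, p].sum() - 1}
        for p in range(num_products)]
bounds = [(0., 1.) if c else (0., 0.) for c in compat.ravel()]

x0 = (compat / compat.sum(axis=0)).ravel()  # start from an equal split
res = minimize(objective, x0, method='SLSQP', bounds=bounds, constraints=cons)

W = res.x.reshape(num_machines, num_products)
util = (seconds * W).sum(axis=1) / capacity
print(util.round(3))  # deviations shrink as far as compatibilities allow
```

Unlike the LP, this always returns its best effort: when perfect uniformity is unreachable, the deviations are spread out instead of the solve failing outright.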
## Reproducible example
```python
import numpy as np
from scipy.optimize import linprog

products = ['Pa', 'Pb', 'Pc', 'Pd']
machines = ['M1', 'M2', 'M3', 'M4', 'M5', 'M6']

num_products = len(products)
num_machines = len(machines)

#                          Pa    Pb    Pc    Pd
product_totals = np.array([6720, 2880, 3840, 5760]) // 3

machine_product_compatibility = np.array([
   #Pa Pb Pc Pd
    [1, 0, 1, 0], # M1: Pa, Pc
    [0, 1, 0, 1], # M2: Pb, Pd
    [0, 1, 1, 0], # M3: Pb, Pc
    [1, 0, 0, 1], # M4: Pa, Pd
    [1, 1, 0, 0], # M5: Pa, Pb
    [0, 0, 1, 1], # M6: Pc, Pd
])

#                        M1  M2  M3  M4  M5  M6
cycle_times = np.array([36, 36, 36, 36, 18, 18])

#Seconds available per machine
T_workday = 24 * 60**2
machine_capacities = np.full(num_machines, T_workday)

# For enforcing uniform machine utilisation
full_utilisation_reference = np.array([6720, 2880, 3840, 5760]).sum() #all machines were 100% here
uniform_ideal = product_totals.sum() / full_utilisation_reference

uniformity_tolerance = 0/100 #Accept 0% deviation either side of the ideal
uniformity_lb = max(0., uniform_ideal - uniformity_tolerance)
uniformity_ub = min(1., uniform_ideal + uniformity_tolerance)

# We want to estimate a weight per product, per machine
num_weights = num_machines * num_products

# === Equality constraints ===
# For each product p, product_weights must sum to 100%
#   A_eq @ product_weights = b_eq
A_eq = np.zeros((num_products, num_weights))
b_eq = np.ones(num_products)
for p in range(num_products):
    #Compatibility matrix, limited to product p
    compatibilities_p = np.zeros_like(machine_product_compatibility)
    compatibilities_p[:, p] = machine_product_compatibility[:, p]

    #Sum of weights for product p (A_eq @ x) = 1 (b_eq)
    A_eq[p, :] = compatibilities_p.ravel()

# === Uniformity constraints ===
# Each machine should be utilised at a similar level:
#   uniform_ideal - tolerance < utilisation % < uniform_ideal + tolerance
A_uniform = np.zeros((num_machines * 2, num_weights))
b_uniform = np.zeros((num_machines * 2,))
for m in range(num_machines):
    #Compatibility matrix, limited to machine m
    compatibilities_m = np.zeros_like(machine_product_compatibility)
    compatibilities_m[m, :] = machine_product_compatibility[m, :]
    seconds_per_x = compatibilities_m * product_totals[None, :] * cycle_times[m]

    #Upper bound
    # utilisation < (ideal + tolerance) * capacity
    A_uniform[2 * m, :] = seconds_per_x.ravel()
    b_uniform[2 * m] = uniformity_ub * machine_capacities[m]

    #Lower bound
    # utilisation > (ideal - tol)*capacity -> -sum < -(ideal - tol)*capacity
    A_uniform[2 * m + 1, :] = -seconds_per_x.ravel()
    b_uniform[2 * m + 1] = -uniformity_lb * machine_capacities[m]

A_ub, b_ub = A_uniform, b_uniform

# === Bounds constraints ===
# Product weights bounded between [0, 1] for compatible product-machine pairs, else 0
bounds = [
    (0., 1.) if machine_product_compatibility[m, p] else (0., 0.)
    for m in range(num_machines) for p in range(num_products)
]

# Coefficients
# Each row (machine) of unweighted_utilisation @ product_weights gives the
# utilisation distribution over products.
# Will find product_weights s.t. utilisation is minimised (i.e. time minimised -> maximal throughput)
unweighted_utilisation = (
    machine_product_compatibility * cycle_times[:, None] * product_totals[None, :]
)

#Estimate product_weights (returned in res.x)
res = linprog(unweighted_utilisation.ravel(), A_ub, b_ub, A_eq, b_eq, bounds)
display(res)

# === View results ===
#Product weights for each machine
product_weights = res.x.reshape(num_machines, num_products)
print(
    'Percentage product weights for machines (rows) and products (columns):\n',
    (product_weights * 100).round().astype(int),
    sep=''
)

#Total utilisation
print(
    '\navailable time: {:,}'.format(machine_capacities.sum()),
    ' | utilised time: {:,}'.format(res.fun)
)

#Machine time per product
print(
    '\nUtilisation time per machine (rows) per product (columns):\n',
    product_weights * unweighted_utilisation,

    '\n\nFractional utilisation per machine:\n',
    (product_weights * unweighted_utilisation).sum(axis=1) / machine_capacities,
    sep=''
)
```