Private API for LearningPi.jl

LearningPi.GraphormerType

This structure provides the implementation of the main neural-network model of this project.

Fields:

  • HiddenMap: a Chain that maps the features into the hidden space,
  • Graphormers: a GNNChain that applies several Graphormer blocks to the hidden-features representation; each component can be seen as a main block of the model,
  • Decoders: a Chain of decoders; it should have as many components as desired predictions,
  • Sampling: the sampling function,
  • n_gr: number of main blocks,
  • train_mode: whether the model is in training mode; the main difference is that when off we do not sample but simply take the mean,
  • prediction_layers: indices of the main blocks after which we want to insert a decoder to provide a Lagrangian multipliers prediction,
  • where_sample: a SamplingPosition to handle the different sampling possibilities,
  • only_last: a boolean saying whether we want only a single Lagrangian multipliers prediction, associated with the last main block,
  • dt: deviation type.

The constructor of this structure takes the following

Arguments:

  • HiddenMap: as in the Fields,
  • Graphormers: as in the Fields,
  • Decoders: as in the Fields,
  • Sampling: as in the Fields,
  • where_sample: as in the Fields,
  • prediction_layers: as in the Fields; empty by default, in which case we predict only after the last Graphormer layer,
  • dt: as in the Fields, by default cr_deviation.

This structure is declared as a Flux functor so that back-propagation is implemented automatically and efficiently. It can be called by simply providing the input graph (a GNNGraph).

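Below is a minimal, hypothetical construction-and-call sketch. The layer sizes, the random generator, and the positional use of the argument order documented above are assumptions, not tested defaults; g stands for a GNNGraph built from an instance.

```julia
using Flux, GraphNeuralNetworks, Random

hidden_map  = Chain(Dense(10 => 64, relu))                        # HiddenMap: features -> hidden space
graphormers = GNNChain(GraphConv(64 => 64), GraphConv(64 => 64))  # two main blocks
decoders    = Chain(Dense(64 => 1))                               # one decoder -> one prediction
sampler     = LearningPi.Sampler(Xoshiro(1))                      # assumed rng-only constructor
where_s     = LearningPi.SamplingPosition(false, false, true)     # assumed: sample before decoding

model = LearningPi.Graphormer(hidden_map, graphormers, decoders, sampler, where_s)
π̂ = model(g)  # g::GNNGraph — the concatenated multiplier predictions
```
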
source
LearningPi.GraphormerMethod

Arguments:

  • x: input of the NN model (a GNNGraph).

Forward computation of a Graphormer m; the output is the concatenation of all the multipliers predicted by the model.

source
LearningPi.GraphormerBlockType

Structure that implements the basic machine-learning main block of this project.

Fields:

  • Convolution: a graph convolutional neural network that performs one round of graph message passing,
  • MLP: a multi-layer perceptron that implements the non-linear part, applied in parallel over all the node hidden features.

The first constructor takes as input the following

Arguments:

  • hidden_sample: a structure composed of three boolean fields to handle the sampling positions in the model,
  • inpOut: the size of the hidden space,
  • init: initialization for the parameters of the model,
  • rng: random number generator for the sampler, the dropout, and all the other random components of the model,
  • pDrop: dropout probability,
  • h_MLP: a vector whose components give the number of nodes in the corresponding layer of the hidden multi-layer perceptron,
  • ConvLayer: convolutional layer, by default GraphConv,
  • act: activation function for the multi-layer perceptron, by default relu,
  • act_conv: activation function for the graph convolutional part, by default identity,
  • aggr: aggregation function for the graph convolutional part, by default mean.

The second constructor directly takes as input the Fields of the structure.

source
LearningPi.GraphormerBlockMethod

Arguments:

  • x: a GNNGraph,
  • h: a features matrix associated with the nodes of x.

Computes the forward pass for the GraphormerBlock. The backward pass is computed automatically as long as all the operations in the forward pass are differentiable by Zygote.

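A hedged sketch of what this forward pass might look like, given only the two documented fields (any residual connections or normalizations of the actual block are omitted):

```julia
# Hypothetical forward pass: one message-passing step, then the node-wise MLP.
function block_forward(b, g, h)
    h = b.Convolution(g, h)  # graph message passing over g
    return b.MLP(h)          # non-linearity applied in parallel on every node's features
end
```
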
source
LearningPi.LayerNormType

Fields:

  • eps: regularization parameter,
  • d: size of the input of the normalization layer.

Describes the layer normalization for the provided parameters.

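For reference, a minimal sketch of the standard layer-normalization formula this layer describes (assuming the textbook definition, with eps as documented above):

```julia
using Statistics

# Normalize a vector to zero mean and unit variance, regularized by eps.
layer_norm(x, eps) = (x .- mean(x)) ./ sqrt(var(x) + eps)
```
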
source
LearningPi.RMSNormType

Fields:

  • eps: additive regularization parameter,
  • sqrtd: multiplicative regularization parameter.

Describes the RMS normalization for the provided parameters.

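A minimal sketch, assuming the textbook RMSNorm definition with sqrtd = √d (so that ‖x‖/sqrtd equals the root mean square of x):

```julia
# RMS(x) = ‖x‖ / √d; eps is the additive regularizer documented above.
rms_norm(x, eps, sqrtd) = x ./ (sqrt(sum(abs2, x)) / sqrtd + eps)
```
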
source
LearningPi.SamplerType

Structure that implements the sampling mechanism for a Gaussian distribution.

Fields:

  • rng: random number generator.

An instance of this structure can be used as a function.

source
LearningPi.SamplerMethod

Arguments:

  • x: a vector of even length; the first half of the components are the mean μ and the last half the standard deviation σ.

The standard deviation parameter is clamped to [-6, 2] (magic numbers, chosen empirically). The output is a vector of half the size of x, sampled from a Gaussian with mean μ and standard deviation σ.

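A hedged sketch of the sampling step via the reparameterization trick; whether the clamped quantity is σ itself or its logarithm is an assumption here:

```julia
using Random

function sample_gaussian(rng::AbstractRNG, x::AbstractVector)
    n = length(x) ÷ 2
    μ = x[1:n]
    σ = exp.(clamp.(x[n+1:end], -6, 2))  # clamp the raw parameter to [-6, 2]
    return μ .+ σ .* randn(rng, n)       # reparameterized Gaussian sample
end
```
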
source
LearningPi.SamplingPositionType

Structure that handles the position in the model where sampling is performed. For the moment, only three alternatives are available; they are encoded in boolean fields.

Fields:

  • outside: if true, the sampling is performed in the output space,
  • hidden_state: in all the hidden states between two main blocks,
  • before_decoding: in the hidden space, but only before calling the decoder.
source
LearningPi.cr_deviationType

Type to use as deviation vector (i.e. the starting point from which our model produces an additive activation) the dual variables associated with the relaxed constraints in the optimal solution of the continuous relaxation.

source
LearningPi.example_gnnType

Structure to encode the training examples when we want to use a GNN model.

Fields:

  • instance: an instance,
  • features: the features associated with the instance,
  • gold: the labels,
  • linear_relaxation: the Lagrangian sub-problem value associated with the dual variables of the continuous relaxation.
source
LearningPi.featuresCWLType

Features structure for the Capacitated Warehouse Location instance.

Fields:

-xCR: primal solution of the Linear Relaxation associated with the variables that assign one item to a pack,
-yCR: primal solution of the Linear Relaxation associated with the variables saying whether or not we use a pack,
-λ: dual solution of the Linear Relaxation associated with the packing constraints,
-μ: dual solution of the Linear Relaxation associated with the packing constraints,
-objCR: objective value of the Linear Relaxation,
-xLR: primal solution of the Knapsack Lagrangian Relaxation associated with the variables that assign one item to a pack (using the dual variables λ of the linear relaxation),
-yLR: primal solution of the Knapsack Lagrangian Relaxation associated with the variables saying whether or not we use a pack (using the dual variables λ of the linear relaxation),
-objLR: objective value of the Knapsack Lagrangian Relaxation (using the dual variables λ of the linear relaxation).

source
LearningPi.featuresGAType

Features structure for the Generalized Assignment instance.

Fields:

-xCR: primal solution of the Linear Relaxation associated with the variables that assign one item to a pack,
-λ: dual solution of the Linear Relaxation associated with the packing constraints,
-μ: dual solution of the Linear Relaxation associated with the packing constraints,
-objCR: objective value of the Linear Relaxation,
-xLR: primal solution of the Knapsack Lagrangian Relaxation associated with the variables that assign one item to a pack (using the dual variables λ of the linear relaxation),
-objLR: objective value of the Knapsack Lagrangian Relaxation (using the dual variables λ of the linear relaxation).

source
LearningPi.featuresMCNDType

Struct containing the information of the features for an instance.

Fields:

  • xCR: the value of the flow variables in the optimal solution of the linear relaxation,
  • yCR: the value of the decision variables in the optimal solution of the linear relaxation,
  • λ: the value of the dual variables associated with the flow constraints in the optimal solution of the linear relaxation,
  • μ: the value of the dual variables associated with the capacity constraints in the optimal solution of the linear relaxation,
  • objCR: the objective value of the linear relaxation,
  • xLR: the value of the flow variables in the optimal solution of the sub-problem of the knapsack relaxation, considering as Lagrangian multipliers the vector λ,
  • yLR: the value of the design variables in the optimal solution of the sub-problem of the knapsack relaxation, considering as Lagrangian multipliers the vector λ,
  • LRarcs: the objective values, for each edge, of the optimal solution of the sub-problem of the knapsack relaxation, considering as Lagrangian multipliers the vector λ,
  • objLR: the objective value of the sub-problem of the knapsack relaxation, considering as Lagrangian multipliers the vector λ,
  • origins: a matrix of size K×V with the cost of the shortest path from the origin to the current node, where the cost of an edge e is ins.r[k,e]+ins.f[e]/ins.c[e],
  • destinations: a matrix of size K×V with the cost of the shortest path from the current node to the destination, where the cost of an edge e is ins.r[k,e]+ins.f[e]/ins.c[e],
  • distance: a matrix of size V×V with the distance, in number of edges, of the shortest path between each pair of nodes.
source
LearningPi.labelsType

Struct containing the information relative to the labels.

Fields:

  • π: matrix containing the gold Lagrangian multipliers. π[k, i] gives the value of the Lagrangian multiplier associated with demand k and node i,
  • x: solution x of the Lagrangian problem. x[k, a] gives the value of the solution x_a^k of the Lagrangian problem L(π) for demand k and arc a,
  • y: solution y of the Lagrangian problem. y[a] gives the value of the solution y_a of the Lagrangian problem L(π) for arc a,
  • LRarcs: Values of the Lagrangian problem for the arcs. LRarcs[a] gives the value of subproblem L_a associated with arc a,
  • objLR: Value of the Lagrangian dual problem.
source
LearningPi.labelsCWLType

Label structure for the Capacitated Warehouse Location Problem.

Fields:

-`π`: optimal Lagrangian multipliers vector,
-`xLR`: primal solution of the Knapsack Lagrangian Relaxation associated with the variables that assign one item to a pack (using the optimal Lagrangian multipliers),
-`yLR`: primal solution of the Knapsack Lagrangian Relaxation associated with the variables saying whether or not we use a pack (using the optimal Lagrangian multipliers),
-`objLR`: optimal value of the Lagrangian Dual.
source
LearningPi.labelsGAType

Label structure for the Generalized Assignment Problem.

Fields:

-`π`: optimal Lagrangian multipliers vector,
-`xLR`: primal solution of the Lagrangian Subproblem with optimal Lagrangian multipliers,
-`objLR`: optimal value of the Lagrangian Dual.
source
LearningPi.learningBlockGNNType

Abstract type used to type the functions that should work with graph neural networks. This abstract type was originally intended for the models that use the block architecture implemented here. In the current implementation it more or less coincides with learningGNN.

source
LearningPi.learningMLPType

Structure that implements the learning type for a simple multi-layer perceptron (without a graph neural network). The feature extraction is a simple manual one, and the model predicts in parallel one value for each dualized constraint.

source
LearningPi.learningMultiPredSampleType

Struct to easily construct a neural network architecture similar to learningSampleTransformer, but predicting multiple deviations using different decoders embedded at the end of the given main blocks.

source
LearningPi.learningSampleGasseType

Struct to easily construct a neural network architecture inspired from:

Gasse, M., Chételat, D., Ferroni, N., Charlin, L., and Lodi, A. Exact Combinatorial Optimization with Graph Convolutional Neural Networks. In Wallach, H., Larochelle, H., Beygelzimer, A., Alché-Buc, F. d., Fox, E., and Garnett,R. (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

Subtype of learningBlockGNN.

source
LearningPi.learningSampleNairType

Struct to easily construct a neural network architecture inspired from:

Nair, V., Bartunov, S., Gimeno, F., von Glehn, I., Lichocki, P., Lobov, I., O’Donoghue, B., Sonnerat, N., Tjandraatmadja, C., Wang, P., Addanki, R., Hapuarachchi, T., Keck, T., Keeling, J., Kohli, P., Ktena, I., Li, Y., Vinyals, O., and Zwols, Y. Solving mixed integer programs using neural networks. CoRR, abs/2012.13349, 2020.

Subtype of learningBlockGNN.

source
LearningPi.learningSampleOutsideType

Struct to easily construct a neural network architecture similar to learningSampleTransformer, but that instead performs the sampling in the Lagrangian multipliers output space.

source
LearningPi.learningSampleTransformerType

Struct to easily construct a neural network architecture presented in:

F. Demelas, J. Le Roux, M. Lacroix, A. Parmentier "Predicting Lagrangian Multipliers for Mixed Integer Linear Programs", ICML 2024.

Subtype of learningBlockGNN.

source
LearningPi.learningTransformerType

Struct to easily construct a neural network architecture similar to learningSampleTransformer, but that does not perform sampling at all.

source
LearningPi.loss_GAPType

Structure that realizes a GAP loss. This structure can be used as a function.

Fields:

  • lr: a Lagrangian sub-problem loss of type loss_LR.

The constructor needs no parameters.

source
LearningPi.loss_GAPMethod

Arguments:

  • π: a Lagrangian Multipliers Vector,
  • example: an abstract example.

Computes the value of the GAP loss.

source
LearningPi.loss_GAP_closureType

Structure that realizes a GAP closure loss. This structure can be used as a function.

Fields:

  • lr: a Lagrangian sub-problem loss of type loss_LR.

The constructor needs no parameters.

source
LearningPi.loss_LRType

Structure that realizes an LR (CPU) loss. This structure can be used as a function. The constructor needs no parameters.

source
LearningPi.loss_LRMethod

Arguments:

  • π: a Lagrangian Multipliers Vector,
  • example: an abstract example.

Computes the value of the LR (CPU) loss.

source
LearningPi.loss_LR_gpuType

Structure that realizes an LR (GPU) loss. This structure can be used as a function. The constructor needs no parameters.

source
LearningPi.loss_LR_gpuMethod

Arguments:

  • π: a Lagrangian Multipliers Vector,
  • example: an abstract example.

Computes the value of the LR GPU loss.

source
LearningPi.loss_hingeType

Structure of parameters for the loss obtained as the inverse of the sub-problem objective value.

Fields:

-α: regularization term. Warning: for the moment this parameter is not used!

source
LearningPi.loss_hingeMethod

Arguments:

  • π: lagrangian multipliers vector candidate,
  • example: dataset sample object.

Computes the value of the Hinge loss.

source
LearningPi.loss_mseType

Structure that realizes an MSE loss. This structure can be used as a function. The constructor needs no parameters.

source
LearningPi.loss_mseMethod

Arguments:

  • π: Lagrangian multipliers vector candidate,
  • example: dataset sample object,
  • _: loss parameters; it should be a structure of type MSELoss.

Returns the loss function value obtained by taking the MSE between the predicted Lagrangian multipliers π and the optimal ones in example.

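A hedged sketch of the computation (the field path example.gold.π follows the example_gnn and labels structures documented above, but is an assumption here):

```julia
using Statistics

# Mean squared error between predicted and gold multipliers.
mse_loss(π, example) = mean((π .- example.gold.π) .^ 2)
```
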
source
LearningPi.loss_multi_LRType

Structure that realizes a multi-prediction LR (CPU) loss. This structure can be used as a function.

Fields:

  • α: a penalization parameter weighting the different predictions, by default 0.5,
  • lr: a loss of type loss_LR, automatically constructed.
source
LearningPi.loss_multi_LRMethod

Arguments:

  • π: a Lagrangian Multipliers Vector,
  • example: an abstract example.

Computes the value of the multi-prediction LR (CPU) loss.

source
LearningPi.zero_deviationType

Type to use as deviation vector (i.e. the starting point from which our model produces an additive activation) the all-zeros vector.

source
ChainRulesCore.rruleMethod

Computes the value of the learning-by-experience loss (using the inverse of the value of the sub-problem) and its pullback function.

source
ChainRulesCore.rruleMethod

Computes the value of the learning-by-experience loss (using the inverse of the value of the sub-problem) and its pullback function.

source
Flux.cpuMethod

Arguments:

  • m: a Graphormer model.

Extends the cpu function of Flux to apply to a Graphormer model.

source
Flux.gpuMethod

Arguments:

  • m: a Graphormer model.

Extends the gpu function of Flux to apply to a Graphormer model.

source
LearningPi.LM_signMethod

Arguments:

-`x`: an unsigned Lagrangian multipliers vector,
-`ins`: an instance (of type `instanceGA`).

Returns the Lagrangian multipliers -softplus(x), since with the encoding used for GA the Lagrangian multipliers are non-positive.

source
LearningPi.LM_signMethod

Arguments:

-`x`: an unsigned Lagrangian multipliers vector,
-`ins`: an instance (of type `abstractInstanceMCND`).

Returns the Lagrangian multipliers x unchanged, since for MCND there is no sign constraint.

source
LearningPi.LM_signMethod

Arguments:

-`x`: an unsigned Lagrangian multipliers vector,
-`ins`: an instance (of type `instanceCWL`).

Returns the Lagrangian multipliers x unchanged, since with the encoding used for CWL there are no sign constraints on the Lagrangian multipliers.

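Taken together, the three LM_sign methods implement a small dispatch on the instance type. A hedged sketch (the exact signatures are assumptions; the instance types are those of the package):

```julia
using Flux: softplus

LM_sign(x, ins::instanceGA)           = -softplus.(x)  # GA: non-positive multipliers
LM_sign(x, ins::abstractInstanceMCND) = x              # MCND: no sign constraint
LM_sign(x, ins::instanceCWL)          = x              # CWL: no sign constraint
```
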
source
LearningPi.adj_var_constrMethod

Arguments:

-`ins`: an instance (of type `abstractInstanceGA`).

Returns the adjacency matrix associated with the dualized constraints and the variable nodes in the bipartite graph representation. The component associated with a pair (constraint, variable) is equal to 1 if and only if the variable has a non-zero coefficient in the constraint, and zero otherwise.

source
LearningPi.adj_var_constrMethod

Arguments:

-ins: an instance (of type abstractInstanceMCND).

Returns the adjacency matrix associated with the dualized constraints and the variable nodes in the bipartite graph representation. The component associated with a pair (constraint, variable) is equal to 1 if and only if the variable has a non-zero coefficient in the constraint, and zero otherwise.

source
LearningPi.adj_var_constrMethod

Arguments:

-`ins`: an instance (of type `instanceCWL`).

Returns the adjacency matrix associated with the dualized constraints and the variable nodes in the bipartite graph representation. The component associated with a pair (constraint, variable) is equal to 1 if and only if the variable has a non-zero coefficient in the constraint, and zero otherwise.

source
LearningPi.aggregate_featuresMethod

Arguments:

  • ins: instance structure,
  • varFeatures: features matrix for the variables of the problem,
  • G: adjacency matrix that has a one in the position of the pair (variable, constraint) if and only if the variable is used in the constraint.

Returns the features associated with the dualized constraints of the instance ins, obtained by aggregating varFeatures over the neighbourhoods induced by the adjacency matrix G.

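A hedged sketch of the idea; the matrix orientations and the use of a mean aggregation are assumptions, not the package's documented choices:

```julia
# varFeatures :: nFeatures × nVars, G :: nVars × nConstrs (binary incidence);
# mean of the features of the variables appearing in each constraint.
aggregate(varFeatures, G) = (varFeatures * G) ./ max.(sum(G; dims=1), 1)
```
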
source
LearningPi.compareWithBestsMethod

Arguments:

  • currentMetrics: a dictionary of Floats,
  • bestMetrics: a dictionary of Floats,
  • nn: a neural network,
  • endString: a string used to memorize the best models.

This function compares each value in bestMetrics with the one in currentMetrics corresponding to the same key. If some value in currentMetrics is better, then we update the corresponding value in bestMetrics and save the model in a BSON file.

source
LearningPi.create_featuresMethod

Arguments:

	- `ins`: instance object, it should be of type instanceGA.

Reads the features and returns a features structure.
source
LearningPi.create_featuresMethod

Arguments:

  • ins: instance structure, should be of type cpuInstanceMCND.

Creates and returns a features structure for the MCND instance ins.

source
LearningPi.create_featuresMethod

Arguments:

- `ins`: instance object, it should be of type `instanceCWL`. 

Solves the continuous relaxation and the Lagrangian sub-problem, considering as Lagrangian multipliers
the dual variables associated with the relaxed constraints, and then returns a features structure.
source
LearningPi.create_model_gasseFunction

Arguments:

  • where_sample: a structure composed of three booleans saying where to perform sampling,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • nBlocks: number of main blocks that compose the core part of the encoder,
  • nNodes: dimension of the hidden space for the features representation,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • act: activation function for the parallel MLP, by default relu,
  • act_conv: activation function for the graph convolutional layers, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP sending the features into the hidden-space representation, by default [100,250,500],
  • hH: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the encoder, by default [500],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the decoder, by default [500, 250, 100],
  • pDrop: dropout parameter, by default 0.001 (unused in this implementation, will be removed soon),
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the NN parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • aggr: the aggregation function, by default mean,
  • prediction_layers: a vector containing the indices of the layers at which we want to perform a prediction; by default [], in which case we use the decoder only after the last main block of the Graphormer.

Returns a model as defined in Graphormer.jl using the provided hyper-parameters.

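A hedged usage sketch; the input size and hyper-parameter values are illustrative, and the positional/keyword split of the arguments is an assumption:

```julia
where_sample = LearningPi.SamplingPosition(false, false, true)   # sample before decoding
model = LearningPi.create_model_gasse(where_sample, 20, 3, 500)  # in=20, nBlocks=3, nNodes=500
```
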
source
LearningPi.create_model_nairFunction

Arguments:

  • where_sample: a structure composed of three booleans saying where to perform sampling,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • nBlocks: number of main blocks that compose the core part of the encoder,
  • nNodes: dimension of the hidden space for the features representation,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • act: activation function for the parallel MLP, by default relu,
  • act_conv: activation function for the graph convolutional layers, by default relu (unused in this implementation, will be removed soon),
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP sending the features into the hidden-space representation, by default [100,250,500],
  • hH: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the encoder, by default [500],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the decoder, by default [500, 250, 100],
  • pDrop: dropout parameter, by default 0.001 (unused in this implementation, will be removed soon),
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the NN parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • aggr: the aggregation function, by default mean,
  • prediction_layers: a vector containing the indices of the layers at which we want to perform a prediction; by default [], in which case we use the decoder only after the last main block of the Graphormer.

Returns a model as defined in Graphormer.jl using the provided hyper-parameters.

source
LearningPi.deviationFromMethod

Arguments:

  • x: the bipartite-graph representation of the instance.

For cr_deviation, it returns the dual variables associated with the dualized constraints in the optimal solution of the continuous relaxation, taking the appropriate components from the node-features matrix of the bipartite-graph representation.

source
LearningPi.deviationFromMethod

Arguments:

  • x: the bipartite-graph representation of the instance.

For zero_deviation, it returns an all-zeros vector of the correct size, i.e. the same size as the dual variables associated with the dualized constraints in the optimal solution of the continuous relaxation, as read from the node-features matrix of the bipartite-graph representation.

source
LearningPi.features_matrixMethod

Arguments:

  • ins: instance structure, it should be a sub-type of instanceGA,
  • featObj: features object containing all the characteristics,
  • fmt: features matrix type.

Constructs the features matrix for a bipartite-graph representation of the instance.

source
LearningPi.features_matrixMethod

Arguments:

  • ins: instance structure, it should be a sub-type of abstractInstanceMCND,
  • featObj: features object containing all the characteristics,
  • fmt: features matrix type.

Constructs the features matrix for a bipartite-graph representation of the instance.

source
LearningPi.features_matrixMethod

Arguments:

  • ins: instance structure, it should be a sub-type of instanceCWL,
  • featObj: features object containing all the characteristics,
  • fmt: features matrix type.

Constructs the features matrix for a bipartite-graph representation of the instance.

source
LearningPi.features_variablesMethod

Arguments:

  • ins: an instance (of type abstractInstanceGA),
  • featObj: features encoded in a dedicated structure,
  • G: adjacency matrix that has a one in the position of the pair (variable, constraint) if and only if the variable is used in the constraint.

Returns the features associated with the variables in ins, using the structure of the instance and featObj.

source
LearningPi.features_variablesMethod

Arguments:

  • ins: instance structure (of type abstractInstanceMCND),
  • featObj: features encoded in a dedicated structure,
  • G: adjacency matrix that has a one in the position of the pair (variable, constraint) if and only if the variable is used in the constraint.

Returns the features associated with the variables in ins, using the structure of the instance and featObj.

source
LearningPi.features_variablesMethod

Arguments:

  • ins: an instance (of type instanceCWL),
  • featObj: features encoded in a dedicated structure,
  • G: adjacency matrix that has a one in the position of the pair (variable, constraint) if and only if the variable is used in the constraint.

Returns the features associated with the variables in ins, using the structure of the instance and featObj.

source
LearningPi.forwardBackwardFunction

Arguments:

  • loss: a structure that contains the parameters α and β of the loss,
  • trainSet: the training dataset structure,
  • nn: the neural network model,
  • currentMetrics: a dictionary of Float,
  • opt: the optimizer used for the training,
  • loss: the loss function,
  • epoch: the current epoch,
  • lt: learning type object,
  • dt: deviation type (0 or the duals of the continuous relaxation).

This function performs the forward and backward passes for the training, considering a generic loss and a generic learning type.

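A hedged sketch of the pattern this performs, written with Flux's implicit-parameters API; the exact call structure inside LearningPi (batching, shuffling, how an example is fed to the model) is an assumption:

```julia
using Flux

for example in trainSet
    g  = example.features                               # hypothetical model input
    ps = Flux.params(nn)
    gs = Flux.gradient(() -> loss(nn(g), example), ps)  # forward pass + AD
    Flux.Optimise.update!(opt, ps, gs)                  # parameter update
end
```
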
source
LearningPi.forwardBackwardMethod

Arguments:

  • trainSet: the (training) set,
  • nn: a model of type Graphormer,
  • currentMetrics: a dictionary that contains the metrics of the current iteration,
  • opt: an Optimiser,
  • loss: loss function,
  • epoch: the epoch counter (this parameter is unused in the current implementation and will soon be removed),
  • lt: learning type (this parameter is unused in the current implementation and will soon be removed),
  • dt: deviation type (this parameter is unused in the current implementation and will soon be removed).

This function performs the forward and backward passes for the model nn over the whole (training) set trainSet.

source
LearningPi.gapMethod

Arguments:

  • example: the current example (dataset point),
  • objPred: the current objective for the example,
  • objGold: the optimal value of the Lagrangian Dual,
  • nInst: the number of the instances in the set.

Computes the GAP of the instance in the example using the predicted objective objPred.

source
LearningPi.gap_closureMethod

Arguments:

  • example: the current example (dataset point),
  • objPred: the current objective for the example,
  • objGold: the optimal value of the Lagrangian Dual,
  • nInst: the number of the instances in the set.

Computes the closure GAP of the instance in the example using the predicted objective objPred. The closure is w.r.t. the value of the Lagrangian sub-problem solved with the dual variables of the continuous relaxation.

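Hedged sketches of the two metrics; the per-instance averaging by nInst and the exact normalization are assumptions based only on the documented arguments (objCR here stands for the sub-problem value at the continuous-relaxation duals):

```julia
# Relative gap of the prediction w.r.t. the optimal Lagrangian dual value.
gap(objPred, objGold, nInst) = abs(objGold - objPred) / (abs(objGold) * nInst)

# Fraction of the gap between the CR-based bound and the optimum closed by the prediction.
gap_closure(objPred, objGold, objCR, nInst) =
    (objPred - objCR) / ((objGold - objCR) * nInst)
```
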
source
LearningPi.get_cr_featuresMethod

Arguments:

-fmt: feature matrix type (it should be without_cr_features_matrix).

In this case we have no CR features, so it returns an empty vector.

source
LearningPi.get_deviceMethod

Arguments:

- `_`: the loss parameters.

Returns the device (CPU/GPU) used to compute the loss. For a general loss this will be the CPU.
source
LearningPi.get_deviceMethod

Arguments:

  • _: loss function.

Returns the device to use with this loss; in this case, the GPU.

source
LearningPi.get_modelMethod

Arguments:

-`nn`: neural network model of type `Graphormer`.

Returns a CPU version of the model that can be saved to a BSON file.

source
LearningPi.get_parametersFunction

Arguments:

  • nn: neural network,
  • lt: learning type,
  • f: unused parameter, to be removed soon.

Returns the model parameters of nn in the case in which nn belongs to the learning type lt.

source
LearningPi.get_parametersMethod

Arguments:

  • nn: a model, sub-type of learningType.

Default implementation: the general rule is that nn is a model on which we can directly call Flux.params. This abstract implementation will soon be removed.

source
LearningPi.get_λMethod

Arguments:

  • x: the bipartite-graph representation of the instance.

Returns the dual variables associated with the dualized constraints in the optimal solution of the continuous relaxation, taking the appropriate components from the node-features matrix of the bipartite-graph representation.

source
LearningPi.get_λMethod

Arguments:

  • x: a GPU features vector.

Returns the dual variables of the CR associated with the dualized constraints.

source
LearningPi.gradient_lspMethod

Arguments:

- `x`: the solution of the Lagrangian Sub-problem,
- `ins`: a cpuInstanceCWL structure.

This function computes and returns the gradient of the sub-problem objective function w.r.t. the Lagrangian multipliers.

source
LearningPi.gradient_lspMethod

Arguments:

- `x`: the solution of the Lagrangian Sub-problem,
- `ins`: a cpuInstanceCWL structure.

This function computes and returns the gradient of the sub-problem objective function w.r.t. the Lagrangian multipliers.

source
LearningPi.gradient_lspMethod

Arguments:

- `x`: the solution of the Lagrangian Sub-problem,
- `ins`: a cpuInstanceCWL structure.

This function computes and returns the gradient of the sub-problem objective function w.r.t. the Lagrangian multipliers.

source
LearningPi.gradient_lspMethod

Arguments:

- `x`: the solution of the Lagrangian Sub-problem,
- `ins`: a cpuInstanceMCND structure.

This function computes and returns the gradient of the sub-problem objective function w.r.t. the Lagrangian multipliers.

source
LearningPi.load_modelMethod

Arguments:

-`nn`: neural network model,
-`lt`: learning type (of type `learningMLP`).

In this case it simply returns the model nn.

source
LearningPi.load_modelMethod

Arguments:

-`nn`: neural network model of type `Graphormer`,
-`lt`: learning type (of type `learningGNN`).

In this case it simply returns the model nn.

source
LearningPi.preprocess_weightMethod

Arguments:

-fmt: feature matrix type (it should be without_cr_features_matrix).

Preprocesses the edge weights of the bipartite graph representation of the instance. An edge corresponds to a pair (variable, constraint). For the moment, three choices are implemented in this project:

- all-ones weights,
- weights equal to the coefficients of the variables in the constraints,
- a modification of the latter that ensures positive weights.

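Hedged sketches of the three choices just listed; the function names are hypothetical and the positivity fix shown is only one possible modification:

```julia
all_ones_weights(coeffs)    = ones(length(coeffs))
coefficient_weights(coeffs) = coeffs
positive_weights(coeffs)    = abs.(coeffs) .+ eps()  # one way to enforce positivity
```
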
source
LearningPi.printBestsMethod

Arguments:

  • bestMetrics: a dictionary of floats that contains the best values found during training for all the considered metrics,
  • path: location of the file where the results are printed.

Takes as input the dictionary of the best metrics and prints the values to standard output and to a file defined by path.

source
LearningPi.printMetricsMethod

Arguments:

  • currentMetrics: a dictionary of Float.

This function takes as input the dictionary of the metrics and prints the values associated with the training and validation sets.

source
LearningPi.print_best_modelsMethod

Arguments:

  • endString: path where the models are saved,
  • bestModels: dictionary of the best models (w.r.t. different metrics) found so far.

Prints to a BSON file, located in the folder endString, the best models found so far.

source
LearningPi.print_jsonMethod

Arguments:

  • ins: instance structure, it should be of type cpuMCNDinstance,
  • lab: labels structure, it should be of type labelsMCND,
  • feat: features structure, it should be of type featuresMCND,
  • fileName: the path to the JSON file where the data are printed,
  • factory: instance factory, it should be of type cpuMCNDinstanceFactory.
source
LearningPi.print_jsonMethod

Arguments:

  • ins: instance structure, it should be of type <: instanceGA,
  • lab: labels structure, it should be of type labelsGA,
  • feat: features structure, it should be of type featuresGA,
  • fileName: the path to the JSON file where the data are printed.

Prints in JSON format the information contained in ins, feat and lab to the file fileName.

source
LearningPi.print_jsonMethod

Arguments:

  • ins: instance structure, it should be of type instanceCWL,
  • lab: labels structure, it should be of type labelsCWL,
  • feat: features structure, it should be of type featuresCWL,
  • fileName: the path to the JSON file where the data are printed.

Prints the information provided in the instance ins, the labels lab and the features feat to a JSON file located at the path fileName.

source
LearningPi.read_labelsMethod

Arguments:

	- `fileLabel`: the path to the file containing the label information,
	- `ins`: instance object, it should be of type `instanceGA`. 

Reads the labels and returns a labels structure.
source
LearningPi.read_labelsMethod

Arguments:

	- `fileLabel`: the path to the file containing the label information,
	- `ins`: instance object, it should be of type abstractInstanceMCND.

Reads the labels and returns a labels structure.
source
LearningPi.read_labelsMethod

Arguments:

	- `fileLabel`: the path to the file containing the label information,
	- `ins`: instance object, it should be a sub-type of `instanceCWL`.

Reads the labels and returns a labels structure.
source
LearningPi.rhsMethod

Arguments:

-`ins`: an instance (of type `abstractInstanceGA`),
- `k`: unused parameter, present only for the signature,
- `i`: bin index.

Returns the right-hand side of the dualized constraint associated with bin i in ins.

source
LearningPi.rhsMethod

Arguments:

-`ins`: an instance (of type `abstractInstanceMCND`),
- `k`: commodity index,
- `i`: vertex index.

Returns the right-hand side of the dualized constraint associated with commodity k and vertex i in ins.

source
LearningPi.rhsMethod

Arguments:

-`ins`: an instance (of type `instanceCWL`),
- `k`: unused parameter, present only for the signature,
- `i`: warehouse index.

Returns the right-hand side of the dualized constraint associated with warehouse i in ins.

source
LearningPi.saveHPMethod

Arguments:

  • endString: a string used as name for the output file,
  • lr: learning rate of the algorithm,
  • decay: decay for the learning rate,
  • h: a list of length #(hidden layers); each component of h contains the number of nodes in the associated hidden layer,
  • opt: optimizer,
  • lt: learning type object,
  • fmt: features matrix type,
  • dt: deviation type,
  • loss: loss function,
  • seedDS: random seed for the dataset generation,
  • seedNN: random seed for the neural network parameters,
  • stepSize: the step size for the decay scheduler of the optimizer,
  • nodes_number: size (number of nodes) of the hidden representation between each layer,
  • block_number: number of blocks in the model,
  • hI: sizes of the Dense layers in the first part, where the node features are sent into the hidden space,
  • hF: sizes of the Dense layers in the final part,
  • dataPath: path to the instances used in the dataset,
  • factory: instance factory type.

This function stores all these hyper-parameters in a JSON file.

source
LearningPi.saveHPMethod

Arguments:

  • endString: a string used as name for the output file,
  • lr: learning rate of the algorithm,
  • decay: decay for the learning rate,
  • h: a list of length #(hidden layers); each component of h contains the number of nodes in the associated hidden layer,
  • opt: optimizer,
  • lt: learning type object,
  • loss: loss function,
  • seedDS: random seed for the dataset generation,
  • seedNN: random seed for the neural network parameters,
  • stepSize: the step size for the decay scheduler of the optimizer.

This function stores all these hyper-parameters in a JSON file.

source
LearningPi.sizeFeaturesMethod

Arguments:

  • lt: learning type, it should be a sub-type of learningGNN,
  • dS: a dataset.

Returns the size of the features matrix.

source
LearningPi.size_features_variableMethod

Arguments:

-fmt: feature matrix type (it should be cr_features_matrix).

Returns the size of the features associated with the variables; in this case 4.

source
LearningPi.size_features_variableMethod

Arguments:

-fmt: feature matrix type (it should be cr_features_matrix).

Returns the size of the features associated with the variables; in this case 4.

source
LearningPi.size_features_variableMethod

Arguments:

-fmt: feature matrix type (it should be cr_features_matrix).

Returns the size of the features associated with the variables; in this case 2.

source
LearningPi.sub_problem_valueMethod

Arguments:

  • _: Lagrangian multipliers vector candidate,
  • v: the value of the loss function,
  • example: dataset sample object,
  • _: loss parameters.

Computes the value of the sub-problem for losses for which it cannot be obtained in a smarter way.

source
LearningPi.sub_problem_valueMethod

Arguments:

  • _: Lagrangian multipliers (not used in this implementation),
  • v: loss function value,
  • example: an abstract_example,
  • _: loss function.

Computes the sub-problem value without solving the Lagrangian sub-problem, when it has already been solved during the computation of the loss.

source
LearningPi.sub_problem_valueMethod

Arguments:

  • _: Lagrangian multipliers (not used in this implementation),
  • v: loss function value,
  • example: an abstract_example,
  • _: loss function.

Computes the sub-problem value without solving the Lagrangian sub-problem, when it has already been solved during the computation of the loss.

source
LearningPi.sub_problem_valueMethod

Arguments:

  • _: Lagrangian multipliers (not used in this implementation),
  • v: loss function value,
  • _: an abstract_example,
  • _: loss function.

Computes the sub-problem value without solving the Lagrangian sub-problem, when it has already been solved during the computation of the loss.

source
LearningPi.sub_problem_valueMethod

Arguments:

  • _: lagrangian multipliers vector candidate,
  • v: the value of the loss function,
  • example: dataset sample object,
  • _: loss parameters, it should be a structure of type HingeLoss.

Computes the value of the sub-problem without recomputing it, using the value of the loss function (for the HingeLoss) and other information contained in the sample.

source
LearningPi.sub_problem_valueMethod

Arguments:

  • _: Lagrangian multipliers (not used in this implementation),
  • v: loss function value,
  • example: an abstract_example,
  • _: loss function.

Computes the sub-problem value without solving the Lagrangian sub-problem, when it has already been solved during the computation of the loss.

source
LearningPi.testAndPrintMethod

function testAndPrint(currentMetrics::Dict,testSet,nn,loss,loss,lt::learningType)

Arguments:

  • currentMetrics: a dictionary of Floats that contains several metrics for the current epoch,
  • testSet: a vector of examples that corresponds to the test set,
  • nn: a neural network model,
  • loss: a structure that encodes the loss function,
  • lt: learning type object.

This function computes different metrics over the test set. The values are stored in the dictionary and printed to the standard output.

source
LearningPi.validationMethod

Arguments:

  • currentMetrics: a dictionary of Floats,
  • valSet: a vector of gnn_dataset that corresponds to the validation set,
  • nn: a neural network model,
  • loss: a structure with the parameters of the loss; for further details on the parameters of a given loss, see the definition of its particular structure,
  • loss: loss function,
  • lt: learning type object,
  • dt: deviation type (0 or the duals of the CR).

This function computes different metrics over the validation set. The values are stored in the dictionary.

source