Public API for LearningPi.jl

LearningPi.createCorpus - Function

Arguments:

  • featType: the type of the features instance,
  • folder: the path to the directory containing the JSON files that define the instances (and the associated features and labels),
  • maxInstance: a vector with three components specifying how many instances to take for the training/validation/test sets,
  • seed: a random seed used to select which instances go into the training/validation/test sets,
  • factory: the instance type,
  • pTrain: the percentage of training instances in the provided folder,
  • pVal: the percentage of validation instances in the provided folder.

Create a Corpus, that is a structure with three DataSet fields, one each for the training, validation, and test datasets. Note: the percentage for the test set is 1 - pTrain - pVal, so pTrain and pVal must be chosen such that pTrain + pVal < 1 and both are non-negative.
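A minimal usage sketch. The positional argument order follows the list above; the concrete path, counts, and percentages are illustrative assumptions, not documented defaults:

```julia
using LearningPi

# Hypothetical call: up to 100/20/20 train/val/test instances, 70% training,
# 15% validation; the remaining 15% implicitly becomes the test set.
corpus = createCorpus(featType, "data/instances/", [100, 20, 20], 1,
                      cpuMCNDinstanceFactory(), 0.7, 0.15)
```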

source
LearningPi.createDataSet - Function

Arguments:

  • lt: the learning type.
  • directory: a list of paths to the instances.
  • maxInstance: the maximum number of instances to consider in the provided directory. By default it is -1, meaning all the instances in the directory are considered.
  • factory: the instance type; the current possibilities are cpuMCNDinstanceFactory() (Multi-Commodity Network Design instances) and cpuCWLinstanceFactory() (Capacitated Warehouse Location instances).

Create the dataset for the provided (general) learning type. Returns a dataSet structure of the proper type.
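A hedged sketch of a call, assuming the argument order of the docstring (lt is a previously constructed learning type):

```julia
# Consider every instance found in the directory (maxInstance = -1).
ds = createDataSet(lt, ["data/train/"], -1, cpuMCNDinstanceFactory())
```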

source
LearningPi.createDataSet - Function

Arguments:

  • `lt`: learning type, it should be a sub-type of `learningGNN`,
  • `directory`: the path to the directory containing the instances,
  • `maxInstance`: maximum number of instances,
  • `factory`: instance factory, a generic sub-type of `abstractInstanceFactory`.

Create and return a dataset for the provided learning type `lt`, considering `maxInstance` instances of the factory `factory` contained in `directory`.

source
LearningPi.createKfold - Function

Arguments:

  • featType: the type of the features instance,
  • folder: the path to the directory containing the JSON files that define the instances (and the associated features and labels),
  • maxInstance: a vector with three components specifying how many instances to take for the training/validation/test sets,
  • seed: a random seed used to select which instances go into the training/validation/test sets,
  • factory: the instance type,
  • k: the fold that we want to select as the test set. Note: 1 <= k <= 10.

Create a Corpus, that is a struct with three DataSet fields for the training/validation/test sets.

source
LearningPi.createLabels - Method

Arguments:

  • π: an (optimal) Lagrangian multipliers vector,
  • x: the flow variables of the Lagrangian sub-problem, obtained after solving the sub-problem with multipliers π,
  • y: the design variables of the Lagrangian sub-problem, obtained after solving the sub-problem with multipliers π,
  • LRarcs: a vector containing the bound for each edge of the Lagrangian sub-problem with π as the Lagrangian multipliers vector,
  • objLR: the bound of the Lagrangian sub-problem with π as the Lagrangian multipliers vector,
  • ins: the instance structure (standard instance formulation, without regularization).

Return a proper label structure.

source
LearningPi.createLabels - Method

Arguments:

  • π: optimal Lagrangian multipliers vector,
  • x: primal solution of the Knapsack Lagrangian relaxation associated with the variables that assign an item to a pack (using the optimal Lagrangian multipliers),
  • y: primal solution of the Knapsack Lagrangian relaxation associated with the variables saying whether a pack is used or not (using the optimal Lagrangian multipliers),
  • objLR: optimal value of the Lagrangian dual,
  • ins: instance object, it should be a sub-type of instanceCWL.

Given all the fields, construct a label structure for the Capacitated Warehouse Location Problem.

source
LearningPi.createLabels - Method

Arguments:

  • `π`: optimal Lagrangian multipliers vector,
  • `x`: primal solution of the Knapsack Lagrangian relaxation associated with the variables that assign an item to a pack (using the optimal Lagrangian multipliers),
  • `objLR`: optimal value of the Lagrangian dual,
  • `ins`: instance object, it should be of type `instanceGA`.

Given all the fields, construct a label structure for the Generalized Assignment Problem.

source
LearningPi.create_example - Method

Arguments:

  • lt: learning type; this function works for all the learning types that use a graph representation of the instance,
  • fileName: path to the JSON file that contains all the information needed to construct the learning sample starting from an instance, its features, and the labels,
  • factory: instance factory; it works for all the factories.

Returns a gnnExample_instance with all the information useful for training.

source
LearningPi.create_example - Method

Arguments:

  • lt: learning type, it should be learningMLP,
  • fileName: the name of the JSON file that contains the information about the instance, its features, and its labels,
  • factory: the instance type (it handles both normalized and un-normalized instances).

Create a structure containing the instance, the extracted features and the associated labels.

source
LearningPi.create_loss - Function

Arguments:

  • _: loss parameters, it should be a structure of type HingeLoss.

Return the loss corresponding to loss parameters of type HingeLoss.

source
LearningPi.create_loss - Method

Arguments:

  • _: a factory, for this implementation it should be of type loss_GAP_closure_factory.

Return a loss corresponding to the factory.

source
LearningPi.create_loss - Method

Arguments:

  • _: a factory, for this implementation it should be of type loss_GAP_factory.

Return a loss corresponding to the factory.

source
LearningPi.create_loss - Method

Arguments:

  • _: a factory, for this implementation it should be of type loss_LR_factory.

Return a loss corresponding to the factory.

source
LearningPi.create_loss - Method

Arguments:

  • _: a factory, for this implementation it should be of type loss_LR_gpu_factory.

Return a loss corresponding to the factory.

source
LearningPi.create_loss - Method

Arguments:

  • _: a factory, for this implementation it should be of type loss_mse_factory.

Return a loss corresponding to the factory.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningTransformer,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningTransformer and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningSampleGasse,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningSampleGasse and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • where_sample: a structure composed of three booleans saying where to perform sampling,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • nBlocks: number of main blocks that compose the core part of the encoder,
  • nNodes: dimension of the hidden space for the features representation,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • act: activation function for the parallel MLP, by default relu,
  • act_conv: activation function for the graph-convolutional layers, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [100, 250, 500],
  • hH: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder, by default [500],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • aggr: the aggregation function, by default mean,
  • prediction_layers: a vector containing the indexes of the layers in which we want to perform a prediction, by default [] (in this case the decoder is used only in the last main block of the Graphormer).

Returns a model as defined in Graphormer.jl using the provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningMultiPredSample,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningMultiPredSample and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningMultiPredTransformer,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningMultiPredTransformer and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningSampleTransformer,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningSampleTransformer and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningSampleOutside,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningSampleOutside and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: learning type, should be learningSampleNair,
  • in: size of the input of the neural network, for each node in the bipartite graph representation,
  • h: a vector containing the number of nodes in the hidden layers that compose the MLP inside the main blocks of the Encoder,
  • out: the dimension of the output of the neural network model, for each dualized constraint, by default 1,
  • a: activation function, by default relu,
  • seed: random generation seed, by default 1,
  • hI: a vector containing the number of nodes in the hidden layers that compose the initial MLP that sends the features into the hidden-space representation, by default [500, 250, 100],
  • hF: a vector containing the number of nodes in the hidden layers that compose the final MLP in the Decoder, by default [500, 250, 100],
  • block_number: number of main blocks that compose the core part of the encoder, by default 5,
  • nodes_number: dimension of the hidden space for the features representation, by default 500,
  • pDrop: dropout parameter, by default 0.001,
  • dt: deviation type, by default cr_deviation(),
  • std: standard deviation used for the initialization of the neural network parameters, by default 0.00001,
  • norm: a boolean saying whether to normalize during the GNN message passing, by default true,
  • final_A: final activation function (in the space of Lagrangian multipliers, but before deviation), by default identity.

Returns the neural network model for learningSampleNair and the other provided hyper-parameters.

source
LearningPi.create_model - Function

Arguments:

  • lType: general learningType,
  • in: size of the input layer,
  • h: a vector whose length equals the desired number of hidden layers, where each component says how many nodes we want in the corresponding hidden layer,
  • out: size of the output layer, by default equal to one,
  • a: the activation function for the hidden layers, by default relu.

This function creates a model for the provided learning type (to use this variant it should be learningArc). The model is a multi-layer perceptron with in nodes in the first layer, length(h) hidden layers (the i-th layer has h[i] nodes), and out nodes in the output layer (by default 1). By default each hidden layer uses a relu activation function (the input and output layers have no activation function).
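As an illustration, a sketch of this MLP variant; the learningArc() constructor call is an assumption based on the docstring:

```julia
# An MLP 10 -> 64 -> 32 -> 1: relu on the two hidden layers,
# no activation on the input and output layers (out = 1, a = relu by default).
nn = create_model(learningArc(), 10, [64, 32])
```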

source
LearningPi.dataLoader - Method

Arguments

- `fileName` : a path to a JSON file that contains the data for the instance, features, and labels,
- `factory` : an instance factory for the Generalized Assignment problem.

This function reads the instance, the features, and the labels from the JSON file and returns three structures that contain all the information.

source
LearningPi.dataLoader - Method

Arguments

- `fileName` : a path to a JSON file that contains the data for the instance, features, and labels,
- `factory` : an instance object (for the same instance as the features file).

It reads the instance, the features, and the labels from the JSON file and returns three structures that contain all the information.

source
LearningPi.dataLoader - Method

Arguments

- `fileName` : a path to a JSON file that contains the data for the instance, features, and labels,
- `factory` : an instance factory for the Capacitated Warehouse Location problem.

It reads the instance, the features, and the labels from the JSON file located at fileName and returns three structures that contain all the information.

source
LearningPi.featuresExtraction - Method

Arguments:

  • featType: features type,
  • features: a features matrix,
  • nbFeatures: the number of features.

Vectorization function for the features when we consider a learningNodeDemand encoding.

source
LearningPi.featuresExtraction - Method

Arguments:

  • lt: learning type, it should be a sub-type of learningType,
  • featObj: features object containing all the characteristics,
  • ins: instance structure, it should be instanceGA,
  • fmt: features matrix type.

Returns the bipartite graph representation with the associated nodes-features matrix.

source
LearningPi.featuresExtraction - Method

Arguments:

  • lt: learning type, it should be a sub-type of learningGNN,
  • featObj: features object containing all the characteristics,
  • ins: instance structure, it should be a sub-type of abstractInstanceMCND.

Returns the bipartite graph representation with the associated nodes-features matrix.

source
LearningPi.featuresExtraction - Method

Arguments:

  • lt: learning type, it should be a sub-type of learningType,
  • featObj: features object containing all the characteristics,
  • ins: instance structure, it should be instanceCWL,
  • fmt: features matrix type.

Returns the bipartite graph representation with the associated nodes-features matrix.

source
LearningPi.prediction - Method

Arguments:

  • nn: neural network model,
  • f: features matrix,
  • ins: structure containing the instance information,
  • lt: learning type (general).

Provides the predicted Lagrangian multipliers.
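A one-line sketch, assuming nn, f, ins, and lt were built as described in the entries above:

```julia
# Returns the vector of predicted Lagrangian multipliers for the instance.
multipliers = prediction(nn, f, ins, lt)
```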

source
LearningPi.train - Method

Arguments:

  • maxEp: the maximum number of epochs for the learning algorithm,
  • dS: the Corpus structure that contains the training, validation, and test sets,
  • nn: the neural network model,
  • opt: the optimizer used for the training,
  • loss: a structure that contains the parameters α and β of the loss,
  • printEpoch: the interval (in epochs) at which the training metrics are printed,
  • endString: the string used to name the output files, such as best models and tensorboard logs,
  • dt: deviation type, it can deviate from zero or from the duals of the continuous relaxation,
  • lt: learning type,
  • seed: random seed for the random generators,
  • bs: batch size.

This function performs the learning with the provided inputs and saves the best models in a BSON file.
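A hedged sketch of a full call, assuming positional arguments in the order listed above; the concrete values, the loss structure, and the Adam optimizer (from Flux.jl) are illustrative assumptions:

```julia
# Train for up to 1000 epochs, printing metrics every 10 epochs,
# batch size 32, output files tagged with "run1".
train(1000, corpus, nn, Adam(1e-4), loss, 10, "run1",
      cr_deviation(), lt, 1, 32)
```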

source