Public API for LearningPi.jl
LearningPi.createCorpus
— Function
Arguments:
- `featType`: the type of the features instance,
- `folder`: the path to the directory containing the JSON files that define the instances (and the associated features and labels),
- `maxInstance`: a vector with three components giving how many instances to take for the training/validation/test sets,
- `seed`: a random seed used to select which instances go into the training/validation/test sets,
- `factory`: the instance factory type,
- `pTrain`: the percentage of training instances in the provided folder,
- `pVal`: the percentage of validation instances in the provided folder.
Create a Corpus, i.e. a structure with three DataSet fields for the training, validation, and test datasets. Note: the percentage for the test set is 1 - pTrain - pVal, so pTrain and pVal must be non-negative and chosen such that pTrain + pVal < 1.
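A minimal usage sketch (the folder path, the instance counts, and the positional argument order are illustrative assumptions, not verified against the package):

```julia
using LearningPi

# Sketch: build a corpus from JSON instance files, assuming the positional
# order listed above. 70%/20% train/validation; the test share is the
# remaining 1 - 0.7 - 0.2 = 0.1.
corpus = createCorpus(featType,                 # features type (assumed in scope)
                      "data/instances",         # folder with the instance JSONs
                      [100, 30, 20],            # max train/validation/test instances
                      1,                        # random seed for the split
                      cpuMCNDinstanceFactory(), # instance factory
                      0.7,                      # pTrain
                      0.2)                      # pVal
```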
LearningPi.createDataSet
— Function
Arguments:
- `lt`: the learning type,
- `directory`: a list of paths to the instances,
- `maxInstance`: the maximum number of instances to consider in the provided directory. The default is `-1`, which means all the instances in the directory are considered,
- `factory`: the instance factory type; at the moment the possibilities are `cpuMCNDinstanceFactory()` (for Multi-Commodity Network Design instances) and `cpuCWLinstanceFactory()` (for the Bin Packing instances).
Create the dataset for the provided (general) learning type. Returns a dataSet structure of the proper type.
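A hypothetical call sketch (the directory path, the cap of 50 instances, and the learning-type value `lt` are assumptions):

```julia
using LearningPi

# Sketch: load at most 50 MCND instances from a directory for the learning
# type `lt`; passing -1 instead would take every instance in the directory.
ds = createDataSet(lt, "data/instances/train", 50, cpuMCNDinstanceFactory())
```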
LearningPi.createDataSet
— Function
Arguments:
- `lt`: learning type, it should be a sub-type of `learningGNN`,
- `directory`: the path to the directory containing instances,
- `maxInstance`: maximum number of instances,
- `factory`: instance factory, a generic sub-type of `abstractInstanceFactory`.
Create and return a dataset for the provided learning type `lt`, considering at most `maxInstance` instances of the factory `factory` contained in `directory`.
LearningPi.createKfold
— Function
Arguments:
- `featType`: the type of the features instance,
- `folder`: the path to the directory containing the JSON files that define the instances (and the associated features and labels),
- `maxInstance`: a vector with three components giving how many instances to take for the training/validation/test sets,
- `seed`: a random seed used to select which instances go into the training/validation/test sets,
- `factory`: the instance factory type,
- `k`: the fold to select as the test set. Note: 1 <= k <= 10.
Create a Corpus, i.e. a struct with three DataSet fields for the training/validation/test sets.
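A sketch of how this might drive a 10-fold cross-validation loop (argument order and the literals are illustrative assumptions):

```julia
using LearningPi

# Sketch: fold k serves as the test set in turn; the remaining folds
# feed the training/validation sets.
for k in 1:10
    corpus = createKfold(featType, "data/instances", [100, 30, 20], 1,
                         cpuMCNDinstanceFactory(), k)
    # ... train on corpus.train, validate, evaluate on fold k ...
end
```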
LearningPi.createLabels
— Method
Arguments:
- `π`: an (optimal) Lagrangian multipliers vector,
- `x`: the flow variables of the Lagrangian sub-problem, obtained after solving the sub-problem with multipliers π,
- `y`: the design variables of the Lagrangian sub-problem, obtained after solving the sub-problem with multipliers π,
- `LRarcs`: a vector containing the bound for each edge of the Lagrangian sub-problem with π as the Lagrangian multipliers vector,
- `objLR`: the bound of the Lagrangian sub-problem with π as the Lagrangian multipliers vector,
- `ins`: the instance structure (standard instance formulation, without regularization).
Return a proper label structure.
LearningPi.createLabels
— Method
Arguments:
- `π`: optimal Lagrangian multipliers vector,
- `x`: primal solution of the Knapsack Lagrangian relaxation associated with the variables assigning an item to a pack (using the optimal Lagrangian multipliers),
- `y`: primal solution of the Knapsack Lagrangian relaxation associated with the variables saying whether a pack is used (using the optimal Lagrangian multipliers),
- `objLR`: optimal value of the Lagrangian dual,
- `ins`: instance object, it should be a sub-type of `instanceCWL`.
Given all the fields, construct a label structure for the Capacitated Warehouse Location Problem.
LearningPi.createLabels
— Method
Arguments:
- `π`: optimal Lagrangian multipliers vector,
- `x`: primal solution of the Knapsack Lagrangian relaxation associated with the variables assigning an item to a pack (using the optimal Lagrangian multipliers),
- `objLR`: optimal value of the Lagrangian dual,
- `ins`: instance object, it should be of type `instanceGA`.
Given all the fields, construct a label structure for the Generalized Assignment Problem.
LearningPi.create_example
— Method
Arguments:
- `lt`: learning type; this function works for all the learning types that use a graph representation of the instance,
- `fileName`: path to the JSON file that contains all the information needed to construct the learning sample from an instance, its features, and the labels,
- `factory`: instance factory; it works for every factory.
Returns a `gnnExample_instance` with all the information useful for the training.
LearningPi.create_example
— Method
Arguments:
- `lt`: learning type, it should be `learningMLP`,
- `fileName`: the name of the JSON file that contains the information about the instance, its features, and its labels,
- `factory`: instance factory (it handles both normalized and un-normalized instances).
Create a structure containing the instance, the extracted features, and the associated labels.
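A hypothetical sketch (the file path and the `learningMLP()` constructor call are assumptions):

```julia
using LearningPi

# Sketch: build a single MLP training example from one instance JSON.
ex = create_example(learningMLP(), "data/instances/inst_1.json",
                    cpuMCNDinstanceFactory())
```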
LearningPi.create_loss
— Function
Arguments:
- loss parameters, a structure of type `HingeLoss`.
Return the loss corresponding to loss parameters of type `HingeLoss`.
LearningPi.create_loss
— Method
Arguments:
- `_`: a factory, for this implementation it should be of type `loss_GAP_closure_factory`.
Return a loss corresponding to the factory.
LearningPi.create_loss
— Method
Arguments:
- `_`: a factory, for this implementation it should be of type `loss_GAP_factory`.
Return a loss corresponding to the factory.
LearningPi.create_loss
— Method
Arguments:
- `_`: a factory, for this implementation it should be of type `loss_LR_factory`.
Return a loss corresponding to the factory.
LearningPi.create_loss
— Method
Arguments:
- `_`: a factory, for this implementation it should be of type `loss_LR_gpu_factory`.
Return a loss corresponding to the factory.
LearningPi.create_loss
— Method
Arguments:
- `_`: a factory, for this implementation it should be of type `loss_mse_factory`.
Return a loss corresponding to the factory.
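The `create_loss` methods dispatch on the factory's type; a sketch, assuming the factories are constructed without arguments:

```julia
using LearningPi

# Sketch: pick a loss by constructing the matching factory.
loss_lr  = create_loss(loss_LR_factory())   # Lagrangian-relaxation loss
loss_mse = create_loss(loss_mse_factory())  # mean-squared-error loss
```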
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningTransformer`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningTransformer`, built with the provided hyper-parameters.
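A hypothetical construction sketch (the layer sizes are illustrative, and whether the remaining arguments are positional or keywords is an assumption; everything not passed keeps the defaults listed above):

```julia
using LearningPi

# Sketch: a Transformer model with 64 input features per node and two
# hidden layers in the encoder-block MLPs; the output stays scalar
# (one value per dualized constraint).
model = create_model(learningTransformer(), 64, [256, 128])
```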
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningSampleGasse`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningSampleGasse`, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `where_sample`: a structure composed of three booleans saying where to perform sampling,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `nBlocks`: number of main blocks composing the core part of the Encoder,
- `nNodes`: dimension of the hidden space for the features representation,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `act`: activation function for the parallel MLP, by default `relu`,
- `act_conv`: activation function for the graph-convolutional layers, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[100, 250, 500]`,
- `hH`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder, by default `[500]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `aggr`: the aggregation function, by default `mean`,
- `prediction_layers`: a vector containing the indexes of the layers in which we want to perform a prediction, by default `[]`; in this case the decoder is used only in the last main block of the Graphormer.
Returns a model as defined in Graphormer.jl, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningMultiPredSample`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningMultiPredSample`, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningMultiPredTransformer`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningMultiPredTransformer`, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningSampleTransformer`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningSampleTransformer`, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningSampleOutside`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningSampleOutside`, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `lType`: learning type, it should be `learningSampleNair`,
- `in`: size of the input of the neural network, for each node in the bipartite graph representation,
- `h`: a vector containing the number of nodes in the hidden layers of the MLP inside the main blocks of the Encoder,
- `out`: the dimension of the output of the neural network model, for each dualized constraint, by default `1`,
- `a`: activation function, by default `relu`,
- `seed`: random generation seed, by default `1`,
- `hI`: a vector containing the number of nodes in the hidden layers of the initial MLP that maps the features into the hidden-space representation, by default `[500, 250, 100]`,
- `hF`: a vector containing the number of nodes in the hidden layers of the final MLP in the Decoder, by default `[500, 250, 100]`,
- `block_number`: number of main blocks composing the core part of the Encoder, by default `5`,
- `nodes_number`: dimension of the hidden space for the features representation, by default `500`,
- `pDrop`: dropout parameter, by default `0.001`,
- `dt`: deviation type, by default `cr_deviation()`,
- `std`: standard deviation used for the initialization of the neural network parameters, by default `0.00001`,
- `norm`: a boolean saying whether to normalize during the GNN message passing, by default `true`,
- `final_A`: final activation function (in the space of Lagrangian multipliers, but before deviation), by default `identity`.
Returns the neural network model for `learningSampleNair`, built with the provided hyper-parameters.
LearningPi.create_model
— Function
Arguments:
- `lType`: general learning type,
- `in`: size of the input layer,
- `h`: a vector with the same length as the desired number of hidden layers, each component saying how many nodes we want in the corresponding hidden layer,
- `out`: size of the output layer, by default equal to `1`,
- `a`: the activation function for the hidden layers, by default `relu`.
This function creates a model for the provided learning type (to use this variant it should be `learningArc`). The model is a multi-layer perceptron with `in` nodes in the first layer, `length(h)` hidden layers (the i-th hidden layer has `h[i]` nodes), and `out` nodes in the output layer (by default 1). By default each hidden layer uses a `relu` activation function (the input and output layers have no activation function).
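A hypothetical sketch of this MLP variant (the sizes are illustrative, and the `learningArc()` constructor call is an assumption):

```julia
using LearningPi

# Sketch: an MLP with a 10-node input layer, two hidden layers of 32 and
# 16 relu nodes, and a single output node (the default `out = 1`).
mlp = create_model(learningArc(), 10, [32, 16])
```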
LearningPi.dataLoader
— Method
Arguments:
- `fileName`: a path to a JSON file that contains the data for the instance, features, and labels,
- `factory`: an instance factory for the Generalized Assignment Problem.
This function reads the instance, the features, and the labels from the JSON file and returns three structures that contain all the information.
LearningPi.dataLoader
— Method
Arguments:
- `fileName`: a path to a JSON file that contains the data for the instance, features, and labels,
- `factory`: an instance object (for the same instance as the features file).
It reads the instance, the features, and the labels from the JSON file and returns three structures that contain all the information.
LearningPi.dataLoader
— Method
Arguments:
- `fileName`: a path to a JSON file that contains the data for the instance, features, and labels,
- `factory`: an instance factory for the Capacitated Warehouse Location Problem.
It reads the instance, the features, and the labels from the JSON file located at `fileName` and returns three structures that contain all the information.
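A hypothetical sketch (the file path and the destructuring order instance/features/labels are assumptions):

```julia
using LearningPi

# Sketch: read one instance JSON back into its three components.
ins, feat, lab = dataLoader("data/instances/inst_1.json",
                            cpuCWLinstanceFactory())
```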
LearningPi.featuresExtraction
— Method
Arguments:
- `featType`: features type,
- `features`: a features matrix,
- `nbFeatures`: the number of features.
Vectorization function for the features when we consider a learningNodeDemand encoding.
LearningPi.featuresExtraction
— Method
Arguments:
- `lt`: learning type, it should be a sub-type of `learningType`,
- `featObj`: features object containing all the characteristics,
- `ins`: instance structure, it should be `instanceGA`,
- `fmt`: features matrix type.
Returns the bipartite graph representation with the associated node-features matrix.
LearningPi.featuresExtraction
— Method
Arguments:
- `lt`: learning type, it should be a sub-type of `learningGNN`,
- `featObj`: features object containing all the characteristics,
- `ins`: instance structure, it should be a sub-type of `abstractInstanceMCND`.
Returns the bipartite graph representation with the associated node-features matrix.
LearningPi.featuresExtraction
— Method
Arguments:
- `lt`: learning type, it should be a sub-type of `learningType`,
- `featObj`: features object containing all the characteristics,
- `ins`: instance structure, it should be `instanceCWL`,
- `fmt`: features matrix type.
Returns the bipartite graph representation with the associated node-features matrix.
LearningPi.prediction
— Method
Arguments:
- `nn`: neural network model,
- `f`: features matrix,
- `ins`: structure containing the instance information,
- `lt`: learning type (general).
Provide the predicted Lagrangian multipliers.
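A hypothetical inference sketch (all four names are assumed to already be in scope, e.g. from `create_model` and `featuresExtraction`):

```julia
using LearningPi

# Sketch: predict the Lagrangian multipliers for one instance with a
# trained model; `nn`, `f`, `ins`, and `lt` come from earlier steps.
π̂ = prediction(nn, f, ins, lt)
```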
LearningPi.train
— Method
Arguments:
- `maxEp`: the maximum number of epochs for the learning algorithm,
- `dS`: the Corpus structure that contains the training, validation, and test sets,
- `nn`: the neural network model,
- `opt`: the optimizer used for the training,
- `loss`: a structure that contains the parameters α and β of the loss,
- `printEpoch`: the number of epochs after which the training metrics are printed,
- `endString`: the string used to name the output files, such as the best models and the TensorBoard logs,
- `dt`: deviation type; it can deviate from zero or from the duals of the continuous relaxation,
- `lt`: learning type,
- `seed`: random seed for the random generators,
- `bs`: batch size.
This function performs the learning with the provided inputs and saves the best models in a BSON file.
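Putting the pieces together, a hypothetical end-to-end sketch (the optimizer, the loss factory, the positional argument order, and every literal below are illustrative assumptions, not verified signatures):

```julia
using LearningPi, Flux

# 1. Build the corpus from the instance JSONs (70%/20%/10% split).
corpus = createCorpus(featType, "data/instances", [100, 30, 20], 1,
                      cpuMCNDinstanceFactory(), 0.7, 0.2)

# 2. Create the model and the loss.
model = create_model(learningTransformer(), 64, [256, 128])
loss  = create_loss(loss_LR_factory())

# 3. Train: 200 epochs, metrics printed every 10 epochs, batch size 8;
#    the best models are saved to BSON files tagged "run_1".
train(200, corpus, model, Flux.Adam(1e-4), loss, 10, "run_1",
      cr_deviation(), learningTransformer(), 1, 8)
```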