# Implicit sequence models

Models for recommending items given a sequence of previous items a user has interacted with.

class spotlight.sequence.implicit.ImplicitSequenceModel(loss='pointwise', representation='pooling', embedding_dim=32, n_iter=10, batch_size=256, l2=0.0, learning_rate=0.01, optimizer_func=None, use_cuda=False, sparse=False, random_state=None, num_negative_samples=5)[source]

Model for sequential recommendations using implicit feedback.

Parameters:

- **loss** (string, optional) – The loss function used to approximate a softmax with negative sampling. One of 'pointwise', 'bpr', 'hinge', 'adaptive_hinge', corresponding to losses from spotlight.losses.
- **representation** (string or instance of spotlight.sequence.representations, optional) – Sequence representation to use. If a string, it must be one of 'pooling', 'cnn', 'lstm', 'mixture'; otherwise it must be one of the representations from spotlight.sequence.representations.
- **embedding_dim** (int, optional) – Number of embedding dimensions used to represent items. Overridden if representation is an instance of a representation class.
- **n_iter** (int, optional) – Number of iterations to run.
- **batch_size** (int, optional) – Minibatch size.
- **l2** (float, optional) – L2 loss penalty.
- **learning_rate** (float, optional) – Initial learning rate.
- **optimizer_func** (function, optional) – Function that takes module parameters as its first argument and returns an instance of a PyTorch optimizer. Overrides l2 and learning_rate if supplied. If not supplied, Adam is used by default.
- **use_cuda** (boolean, optional) – Run the model on a GPU.
- **sparse** (boolean, optional) – Use sparse gradients for embedding layers.
- **random_state** (instance of numpy.random.RandomState, optional) – Random state to use when fitting.
- **num_negative_samples** (int, optional) – Number of negative samples to generate for the adaptive hinge loss.
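To illustrate how num_negative_samples is used, the adaptive hinge loss scores several sampled negative items and penalises only the highest-scoring one. The following is a plain-Python sketch of that idea, not Spotlight's actual implementation (which operates on score tensors in spotlight.losses):

```python
def adaptive_hinge_loss(positive_score, negative_scores, margin=1.0):
    """Hinge loss against the hardest of the sampled negatives.

    negative_scores holds one score per sampled negative item
    (num_negative_samples of them); only the maximum contributes,
    approximating WARP-style ranking losses.
    """
    hardest_negative = max(negative_scores)
    return max(0.0, margin - positive_score + hardest_negative)

# With the positive item scored 2.0 and three sampled negatives,
# only the hardest negative (1.5) enters the loss: 1.0 - 2.0 + 1.5 = 0.5
loss = adaptive_hinge_loss(2.0, [0.5, 1.5, -0.2])
```

Increasing num_negative_samples makes it more likely that a genuinely hard negative is found at each step, at the cost of extra score computations.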

Notes

During fitting, the model computes the loss for each timestep of the supplied sequence. For example, suppose the following sequences are passed to the fit function:

```python
[[1, 2, 3, 4, 5],
 [0, 0, 7, 1, 4]]
```


In this case, the loss for the first example will be the mean loss of trying to predict 2 from [1], 3 from [1, 2], 4 from [1, 2, 3], and so on. This means that explicit padding of all subsequences is not necessary (although it is possible by using the step_size parameter of spotlight.interactions.Interactions.to_sequence()).
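The expansion described above can be sketched in plain Python. This is only an illustration of the training scheme (Spotlight computes these losses vectorised inside the model); the helper name and the treatment of the padding index 0 are this sketch's assumptions:

```python
def expand_subsequences(sequence, padding_idx=0):
    """Enumerate the (context, target) pairs a sequence contributes to the loss.

    Mirrors the scheme described above: the item at each timestep is
    predicted from all preceding items. Padding entries (index 0) are
    never used as targets and are dropped from contexts.
    """
    pairs = []
    for t in range(1, len(sequence)):
        target = sequence[t]
        if target == padding_idx:
            continue
        context = [item for item in sequence[:t] if item != padding_idx]
        pairs.append((context, target))
    return pairs

pairs = expand_subsequences([1, 2, 3, 4, 5])
# ([1], 2), ([1, 2], 3), ([1, 2, 3], 4), ([1, 2, 3, 4], 5)
```

Applied to the padded second example [0, 0, 7, 1, 4], the same expansion yields targets 7, 1, and 4, with the leading zeros contributing nothing.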

fit(interactions, verbose=False)[source]

Fit the model.

When called repeatedly, model fitting will resume from the point at which training stopped in the previous fit call.

Parameters:

- **interactions** (spotlight.interactions.SequenceInteractions) – The input sequence dataset.

predict(sequences, item_ids=None)[source]

Make predictions: given a sequence of interactions, predict the next item in the sequence.

Parameters:

- **sequences** (array, (1 x max_sequence_length)) – Array containing the indices of the items in the sequence.
- **item_ids** (array (num_items x 1), optional) – Array containing the item ids for which prediction scores are desired. If not supplied, predictions for all items will be computed.

Returns:

- **predictions** (array) – Predicted scores for all items in item_ids.
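Conceptually, prediction scores each candidate item by a dot product between the sequence representation and the item's embedding. The sketch below assumes the simplest 'pooling' representation (mean of item embeddings) and uses toy hand-written embeddings; Spotlight's real computation happens inside the chosen representation module on learned embeddings:

```python
def pooling_representation(sequence_embeddings):
    """Mean-pool the item embeddings of a sequence into one vector."""
    dim = len(sequence_embeddings[0])
    return [sum(vec[d] for vec in sequence_embeddings) / len(sequence_embeddings)
            for d in range(dim)]

def predict_scores(sequence_embeddings, item_embeddings):
    """Score each candidate item by its dot product with the pooled sequence."""
    seq_vec = pooling_representation(sequence_embeddings)
    return [sum(s * i for s, i in zip(seq_vec, item_vec))
            for item_vec in item_embeddings]

# Toy 2-dimensional embeddings: a sequence of two items, three candidates.
scores = predict_scores([[1.0, 0.0], [0.0, 1.0]],
                        [[1.0, 1.0], [1.0, -1.0], [0.0, 0.0]])
# The pooled sequence vector is [0.5, 0.5]; candidate scores are [1.0, 0.0, 0.0]
```

Passing a restricted item_ids array corresponds to supplying only those candidates' embedding rows, which is useful when re-ranking a pre-filtered candidate set.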