theano.tensor.nnet.ctc – Connectionist Temporal Classification (CTC) loss

Note

Using the connectionist temporal classification (CTC) loss Op requires that the warp-ctc library be available. If the warp-ctc library is not in your compiler’s library path, the config.ctc.root configuration option must be set to the directory containing the warp-ctc library files.
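
As a minimal sketch, one way to do this is through the THEANO_FLAGS environment variable before importing Theano, assuming config.ctc.root can be set like other configuration options; the path below is a placeholder for your own warp-ctc build directory (the same value can also be placed in .theanorc):

    import os
    # Placeholder path: replace with the directory containing the warp-ctc library files
    os.environ["THEANO_FLAGS"] = "ctc.root=/path/to/warp-ctc/build"
    import theano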

Note

This is the preferred interface. It will be moved automatically to the GPU.

Note

Unfortunately, Windows platforms are not yet supported by the underlying library.

theano.tensor.nnet.ctc.ctc(activations, labels, input_lengths)[source]

Compute CTC loss function.

Notes

Using the loss function requires that Baidu’s warp-ctc library be installed. If the warp-ctc library is not on the compiler’s default library path, the configuration variable config.ctc.root must be properly set.

Parameters
  • activations – Three-dimensional tensor with shape (t, m, p), where t is the time index, m is the minibatch index, and p is the index over the probabilities of each symbol in the alphabet. The memory layout is assumed to be C-ordered, running from the slowest to the fastest changing dimension, left to right; in this case, p is the fastest changing dimension.

  • labels – A 2-D tensor of all the labels for the minibatch. Each row contains the sequence of target labels for one example. Negative values are treated as padding and are ignored. The blank symbol is assumed to have index 0 in the alphabet.

  • input_lengths – A 1-D tensor with the number of time steps for each sequence in the minibatch.

Returns

Cost of each example in the minibatch.

Return type

1-D array
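
The following is a minimal sketch of how the function might be wired into a graph, assuming int32 labels and sequence lengths; the variable names are illustrative only:

    import theano
    import theano.tensor as T
    from theano.tensor.nnet.ctc import ctc

    # Symbolic inputs matching the shapes described above
    activations = T.tensor3("activations")      # (t, m, p)
    labels = T.imatrix("labels")                # (m, max_label_len), negative values = padding
    input_lengths = T.ivector("input_lengths")  # (m,), time steps per sequence

    costs = ctc(activations, labels, input_lengths)  # 1-D: one cost per example
    grad = T.grad(costs.sum(), wrt=activations)      # gradient w.r.t. the activations

    f = theano.function([activations, labels, input_lengths], [costs, grad])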

class theano.tensor.nnet.ctc.ConnectionistTemporalClassification(compute_grad=True, openmp=None)[source]

CTC loss function wrapper.

Notes

Using the wrapper requires that Baidu’s warp-ctc library be installed. If the warp-ctc library is not on your compiler’s default library path, you must set the configuration variable config.ctc.root appropriately.

Parameters

compute_grad – If set to True, enables the computation of gradients of the CTC loss function.
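
As a sketch, the Op can also be instantiated and applied directly instead of going through the ctc() helper above; the assumption here is that the instance accepts the same three inputs (activations, labels, input_lengths):

    import theano.tensor as T
    from theano.tensor.nnet.ctc import ConnectionistTemporalClassification

    # Forward-only instance: no gradient is computed for this Op
    ctc_cost = ConnectionistTemporalClassification(compute_grad=False)

    activations = T.tensor3("activations")
    labels = T.imatrix("labels")
    input_lengths = T.ivector("input_lengths")
    costs = ctc_cost(activations, labels, input_lengths)  # 1-D cost per example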