Computes best recall where precision is >= specified value.
Inherits From: Metric
```python
tf.keras.metrics.RecallAtPrecision(
    precision, num_thresholds=200, class_id=None, name=None, dtype=None
)
```

For a given score-label distribution, the required precision might not be achievable; in that case, 0.0 is returned as recall.
This metric creates four local variables, `true_positives`, `true_negatives`, `false_positives` and `false_negatives`, that are used to compute the recall at the given precision. The threshold for the given precision value is computed and used to evaluate the corresponding recall.
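The sketch below illustrates the idea of the computation (it is not the library implementation): sweep a grid of thresholds, compute precision and recall at each, and report the best recall among thresholds whose precision meets the target. The evenly spaced threshold grid and the strict `>` comparison are assumptions made for this example.

```python
import numpy as np

def recall_at_precision(y_true, y_pred, target_precision, num_thresholds=200):
    """Best recall over thresholds whose precision >= target_precision.

    Simplified sketch: unweighted samples, evenly spaced thresholds.
    Returns 0.0 if no threshold reaches the target precision.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=float)

    best_recall = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds):
        predicted_pos = y_pred > t
        tp = np.sum(predicted_pos & y_true)
        fp = np.sum(predicted_pos & ~y_true)
        fn = np.sum(~predicted_pos & y_true)
        precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        if precision >= target_precision:
            best_recall = max(best_recall, recall)
    return best_recall

print(recall_at_precision([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], 0.8))  # 0.5
```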
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
If `class_id` is specified, we calculate precision by considering only the entries in the batch for which the prediction for `class_id` is above the threshold, and computing the fraction of them for which `class_id` is indeed a correct label.
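For instance, with a model that outputs per-class probabilities, the metric can be restricted to a single class index. The one-hot label layout and the scores below are illustrative assumptions, not values from this page.

```python
import keras

# Track recall at 0.8 precision for class index 1 only (illustrative values).
m = keras.metrics.RecallAtPrecision(0.8, class_id=1)
y_true = [[0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]   # one-hot labels
y_pred = [[0.1, 0.8, 0.1], [0.6, 0.3, 0.1],
          [0.2, 0.7, 0.1], [0.2, 0.2, 0.6]]             # per-class scores
m.update_state(y_true, y_pred)
print(float(m.result()))
```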
Example:
```python
m = keras.metrics.RecallAtPrecision(0.8)
m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])
m.result()
# 0.5

m.reset_state()
m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9],
               sample_weight=[1, 0, 0, 1])
m.result()
# 1.0
```
Usage with compile() API:
```python
model.compile(
    optimizer='sgd',
    loss='binary_crossentropy',
    metrics=[keras.metrics.RecallAtPrecision(precision=0.8)])
```

| Attributes | |
|---|---|
| `dtype` | |
| `variables` | |
Methods
add_variable
```python
add_variable(
    shape, initializer, dtype=None, aggregation='sum', name=None
)
```

add_weight
```python
add_weight(
    shape=(), initializer=None, dtype=None, name=None
)
```

from_config
```python
@classmethod
from_config(
    config
)
```
get_config
```python
get_config()
```

Return the serializable config of the metric.
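The config is typically a plain Python dict of constructor arguments, so a metric can be re-created from it; the round trip below uses only the documented `get_config`/`from_config` pair.

```python
m = keras.metrics.RecallAtPrecision(0.8, num_thresholds=100)
config = m.get_config()
m2 = keras.metrics.RecallAtPrecision.from_config(config)  # equivalent metric, fresh state
```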
reset_state
```python
reset_state()
```

Reset all of the metric state variables.
This function is called between epochs/steps, when a metric is evaluated during training.
result
```python
result()
```

Compute the current metric value.
| Returns | |
|---|---|
| A scalar tensor, or a dictionary of scalar tensors. |
stateless_reset_state
```python
stateless_reset_state()
```

stateless_result

```python
stateless_result(
    metric_variables
)
```

stateless_update_state

```python
stateless_update_state(
    metric_variables, *args, **kwargs
)
```
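The stateless variants are intended for functional-style training loops: instead of mutating the metric's variables in place, they take the current variable values as input and return new values. The sketch below assumes the Keras 3 semantics in which `stateless_reset_state` returns initial variable values, `stateless_update_state` returns updated values, and `stateless_result` computes the metric from the values it is given.

```python
m = keras.metrics.RecallAtPrecision(0.8)

# Functional-style usage: variable values are threaded through explicitly.
metric_vars = m.stateless_reset_state()
metric_vars = m.stateless_update_state(
    metric_vars, [0, 0, 1, 1], [0, 0.5, 0.3, 0.9])
value = m.stateless_result(metric_vars)  # the metric object itself is not mutated
```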
update_state

```python
update_state(
    y_true, y_pred, sample_weight=None
)
```

Accumulates confusion matrix statistics.
| Args | |
|---|---|
| `y_true` | The ground truth values. |
| `y_pred` | The predicted values. |
| `sample_weight` | Optional weighting of each example. Defaults to `1`. Can be a tensor whose rank is either 0, or the same rank as `y_true`, and must be broadcastable to `y_true`. |
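Successive calls accumulate: the confusion-matrix counts from each batch are added to the running totals until `reset_state` is called. For example:

```python
m = keras.metrics.RecallAtPrecision(0.8)
m.update_state([0, 1], [0.1, 0.9])  # first batch
m.update_state([1, 0], [0.8, 0.2])  # second batch adds to the same counts
m.result()                          # reflects all four examples seen so far
```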
__call__
```python
__call__(
    *args, **kwargs
)
```

Call self as a function.
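In practice, calling the metric object directly forwards its arguments to `update_state` and then returns `result()`; this shorthand behavior comes from the base `Metric` class and is assumed here rather than stated on this page.

```python
m = keras.metrics.RecallAtPrecision(0.8)
value = m([0, 0, 1, 1], [0, 0.5, 0.3, 0.9])  # updates state, then returns result()
```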