
f1

Bases: metric

The f1 evaluation metric.

The class inherits from the base metric class.

...

Attributes:

name : str, default = 'f1'
    Name of the f1 evaluation metric.
metric : object
    The f1 evaluation metric calculation method.
average : str, default = 'binary'
    The average parameter used for the metric calculation. It takes its value from {'micro', 'macro', 'samples', 'weighted', 'binary'} or None.
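
The average attribute controls how per-class scores are combined. A minimal sketch of its effect, using the constructor and evaluate method documented on this page:

from tinybig.metric import f1

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# 'macro': unweighted mean of the per-class f1 scores
print(f1(name='macro_f1', average='macro').evaluate(y_true=y_true, y_pred=y_pred))    # ~0.267
# 'micro': f1 computed from the global true positive, false positive and false negative counts
print(f1(name='micro_f1', average='micro').evaluate(y_true=y_true, y_pred=y_pred))    # ~0.333
# None: no averaging, per-class scores are returned as an array
print(f1(name='per_class_f1', average=None).evaluate(y_true=y_true, y_pred=y_pred))   # array([0.8, 0. , 0. ])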

Methods:

__init__
    It performs the initialization of the f1 evaluation metric. Its internal metric calculation method is declared to be f1_score from sklearn.
evaluate
    It implements the abstract evaluate method declared in the base metric class. The method calculates the f1 score of the input prediction labels.
__call__
    It reimplements the abstract callable method declared in the base metric class.
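
Taken together, a typical usage pattern looks like the sketch below (based only on the methods listed above): construct the metric once, then score predictions either through evaluate or by calling the object directly.

from tinybig.metric import f1

f1_metric = f1(name='f1', average='binary')

y_true = [1, 1, 0, 0]
y_pred = [1, 1, 0, 1]

# explicit evaluate call
score = f1_metric.evaluate(y_true=y_true, y_pred=y_pred)   # 0.8
# equivalent callable form; __call__ delegates to evaluate internally
score = f1_metric(y_true=y_true, y_pred=y_pred)            # 0.8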

Source code in tinybig/metric/classification_metric.py
class f1(metric):
    """
    The f1 evaluation metric.

    The class inherits from the base metric class.

    ...

    Attributes
    ----------
    name: str, default = 'f1'
        Name of the f1 evaluation metric.
    metric: object
        The f1 evaluation metric calculation method.
    average: str, default = 'binary'
        The average parameter used for the metric calculation. It takes its value from {'micro', 'macro', 'samples', 'weighted', 'binary'} or None.

    Methods
    ----------
    __init__
        It performs the initialization of the f1 evaluation metric. Its internal metric calculation method is declared to be f1_score from sklearn.

    evaluate
        It implements the abstract evaluate method declared in the base metric class. The method calculates the f1 score of the input prediction labels.

    __call__
        It reimplements the abstract callable method declared in the base metric class.

    """
    def __init__(self, name: str = 'f1', average: str = 'binary'):
        """
        The initialization method of the f1 evaluation metric.

        It initializes an f1 evaluation metric object based on the input metric name.
        This method will also call the initialization method of the base class.
        The metric calculation approach is initialized as sklearn.metrics.f1_score with the default average parameter "binary".

        Parameters
        ----------
        name: str, default = 'f1'
            The name of the evaluation metric.
        average: str, default = 'binary'
            The average parameter of the f1 evaluation metric.
        """
        super().__init__(name=name)
        self.metric = f1_score
        self.average = average

    def evaluate(self, y_true: list, y_pred: list, average=None, *args, **kwargs):
        """
        The evaluate method of the f1 evaluation metric class.

        It calculates the f1 score based on the provided input parameters "y_true" and "y_pred".
        The method will return the calculated f1 score as the output.

        Examples
        ----------
        Binary classification f1 score
        >>> from tinybig.metric import f1 as f1_metric
        >>> y_true = [1, 1, 0, 0]
        >>> y_pred = [1, 1, 0, 1]
        >>> binary_f1 = f1_metric(name='f1_metric', average='binary')
        >>> binary_f1.evaluate(y_true=y_true, y_pred=y_pred)
        0.8

        Multi-class classification f1 score
        >>> y_true = [0, 1, 2, 0, 1, 2]
        >>> y_pred = [0, 2, 1, 0, 0, 1]
        >>> f1_metric_macro = f1_metric(name='f1_metric_macro', average='macro')
        >>> f1_metric_macro.evaluate(y_true=y_true, y_pred=y_pred)
        0.26...
        >>> f1_metric_micro = f1_metric(name='f1_metric_micro', average='micro')
        >>> f1_metric_micro.evaluate(y_true=y_true, y_pred=y_pred)
        0.33...
        >>> f1_metric_weighted = f1_metric(name='f1_metric_weighted', average='weighted')
        >>> f1_metric_weighted.evaluate(y_true=y_true, y_pred=y_pred)
        0.26...
        >>> f1_metric_per_class = f1_metric(name='f1_metric_per_class', average=None)
        >>> f1_metric_per_class.evaluate(y_true=y_true, y_pred=y_pred)
        array([0.8, 0. , 0. ])

        Multi-label classification f1 score
        >>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
        >>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
        >>> f1_metric_multilabel = f1_metric(name='f1_metric_multilabel', average=None)
        >>> f1_metric_multilabel.evaluate(y_true=y_true, y_pred=y_pred)
        array([0.66666667, 1.        , 0.66666667])

        Parameters
        ----------
        y_true: list
            The list of true labels of data instances.
        y_pred: list
            The list of predicted labels of data instances.
        average: str, default = None
            The average parameter for this call. If None, the metric's own average attribute is used.
        args: list
            Other parameters
        kwargs: dict
            Other parameters

        Returns
        -------
        float | list
            The calculated f1 score of the input parameters.
        """
        average = average if average is not None else self.average
        return self.metric(y_true=y_true, y_pred=y_pred, average=average)

    def __call__(self, y_true: list, y_pred: list, *args, **kwargs):
        """
        The callable method of the f1 metric class.

        It re-implements the built-in callable method.
        This method will call the evaluate method to calculate the f1 score of the input parameters.

        Examples
        ----------
        Binary classification f1 score
        >>> from tinybig.metric import f1 as f1_metric
        >>> y_pred = [1, 1, 0, 0]
        >>> y_true = [1, 1, 0, 1]
        >>> f1_metric = f1_metric(name='f1_metric', average='binary')
        >>> f1_metric(y_true=y_true, y_pred=y_pred)
        0.8

        Parameters
        ----------
        y_true: list
            The list of true labels of data instances.
        y_pred: list
            The list of predicted labels of data instances.
        args: list
            Other parameters
        kwargs: dict
            Other parameters

        Returns
        -------
        float | list
            The calculated f1 score of the input parameters.
        """
        return self.evaluate(y_true=y_true, y_pred=y_pred, *args, **kwargs)

__call__(y_true, y_pred, *args, **kwargs)

The callable method of the f1 metric class.

It re-implements the built-in callable method. This method will call the evaluate method to calculate the f1 score of the input parameters.

Examples:

Binary classification f1 score

>>> from tinybig.metric import f1 as f1_metric
>>> y_pred = [1, 1, 0, 0]
>>> y_true = [1, 1, 0, 1]
>>> f1_metric = f1_metric(name='f1_metric', average='binary')
>>> f1_metric(y_true=y_true, y_pred=y_pred)
0.8

Parameters:

y_true : list, required
    The list of true labels of data instances.
y_pred : list, required
    The list of predicted labels of data instances.
args : tuple, default = ()
    Other parameters.
kwargs : dict, default = {}
    Other parameters.

Returns:

float | list
    The calculated f1 score of the input parameters.

Source code in tinybig/metric/classification_metric.py
def __call__(self, y_true: list, y_pred: list, *args, **kwargs):
    """
    The callable method of the f1 metric class.

    It re-implements the built-in callable method.
    This method will call the evaluate method to calculate the f1 score of the input parameters.

    Examples
    ----------
    Binary classification f1 score
    >>> from tinybig.metric import f1 as f1_metric
    >>> y_pred = [1, 1, 0, 0]
    >>> y_true = [1, 1, 0, 1]
    >>> f1_metric = f1_metric(name='f1_metric', average='binary')
    >>> f1_metric(y_true=y_true, y_pred=y_pred)
    0.8

    Parameters
    ----------
    y_true: list
        The list of true labels of data instances.
    y_pred: list
        The list of predicted labels of data instances.
    args: list
        Other parameters
    kwargs: dict
        Other parameters

    Returns
    -------
    float | list
        The calculated f1 score of the input parameters.
    """
    return self.evaluate(y_true=y_true, y_pred=y_pred, *args, **kwargs)

__init__(name='f1', average='binary')

The initialization method of the f1 evaluation metric.

It initializes an f1 evaluation metric object based on the input metric name. This method will also call the initialization method of the base class. The metric calculation approach is initialized as sklearn.metrics.f1_score with the default average parameter "binary".

Parameters:

name : str, default = 'f1'
    The name of the evaluation metric.
average : str, default = 'binary'
    The average parameter of the f1 evaluation metric.
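
For illustration, a short construction sketch with a non-default averaging mode; it assumes, as documented in the Attributes section above, that the name and average values are stored on the object.

from tinybig.metric import f1

weighted_f1 = f1(name='weighted_f1', average='weighted')
print(weighted_f1.name)     # 'weighted_f1'
print(weighted_f1.average)  # 'weighted'
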
Source code in tinybig/metric/classification_metric.py
def __init__(self, name: str = 'f1', average: str = 'binary'):
    """
    The initialization method of the f1 evaluation metric.

    It initializes an f1 evaluation metric object based on the input metric name.
    This method will also call the initialization method of the base class.
    The metric calculation approach is initialized as sklearn.metrics.f1_score with the default average parameter "binary".

    Parameters
    ----------
    name: str, default = 'f1'
        The name of the evaluation metric.
    average: str, default = 'binary'
        The average parameter of the f1 evaluation metric.
    """
    super().__init__(name=name)
    self.metric = f1_score
    self.average = average

evaluate(y_true, y_pred, average=None, *args, **kwargs)

The evaluate method of the f1 evaluation metric class.

It calculates the f1 score based on the provided input parameters "y_true" and "y_pred". The method will return the calculated f1 score as the output.

Examples:

Binary classification f1 score

>>> from tinybig.metric import f1 as f1_metric
>>> y_true = [1, 1, 0, 0]
>>> y_pred = [1, 1, 0, 1]
>>> binary_f1 = f1_metric(name='f1_metric', average='binary')
>>> binary_f1.evaluate(y_true=y_true, y_pred=y_pred)
0.8

Multi-class classification f1 score

>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> f1_metric_macro = f1_metric(name='f1_metric_macro', average='macro')
>>> f1_metric_macro.evaluate(y_true=y_true, y_pred=y_pred)
0.26...
>>> f1_metric_micro = f1_metric(name='f1_metric_micro', average='micro')
>>> f1_metric_micro.evaluate(y_true=y_true, y_pred=y_pred)
0.33...
>>> f1_metric_weighted = f1_metric(name='f1_metric_weighted', average='weighted')
>>> f1_metric_weighted.evaluate(y_true=y_true, y_pred=y_pred)
0.26...
>>> f1_metric_per_class = f1_metric(name='f1_metric_per_class', average=None)
>>> f1_metric_per_class.evaluate(y_true=y_true, y_pred=y_pred)
array([0.8, 0. , 0. ])

Multi-label classification f1 score

>>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
>>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
>>> f1_metric_multilabel = f1_metric(name='f1_metric_multilabel', average=None)
>>> f1_metric_multilabel.evaluate(y_true=y_true, y_pred=y_pred)
array([0.66666667, 1.        , 0.66666667])

Parameters:

y_true : list, required
    The list of true labels of data instances.
y_pred : list, required
    The list of predicted labels of data instances.
average : str, default = None
    The average parameter for this call. If None, the metric's own average attribute is used.
args : tuple, default = ()
    Other parameters.
kwargs : dict, default = {}
    Other parameters.

Returns:

float | list
    The calculated f1 score of the input parameters.
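
Because evaluate accepts its own average argument, the value chosen at construction time can be overridden for a single call, as in this short sketch:

from tinybig.metric import f1

f1_macro = f1(name='f1_macro', average='macro')
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

print(f1_macro.evaluate(y_true=y_true, y_pred=y_pred))                   # ~0.267, uses the stored 'macro'
print(f1_macro.evaluate(y_true=y_true, y_pred=y_pred, average='micro'))  # ~0.333, per-call override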

Source code in tinybig/metric/classification_metric.py
def evaluate(self, y_true: list, y_pred: list, average=None, *args, **kwargs):
    """
    The evaluate method of the f1 evaluation metric class.

    It calculates the f1 score based on the provided input parameters "y_true" and "y_pred".
    The method will return the calculated f1 score as the output.

    Examples
    ----------
    Binary classification f1 score
    >>> from tinybig.metric import f1 as f1_metric
    >>> y_true = [1, 1, 0, 0]
    >>> y_pred = [1, 1, 0, 1]
    >>> binary_f1 = f1_metric(name='f1_metric', average='binary')
    >>> binary_f1.evaluate(y_true=y_true, y_pred=y_pred)
    0.8

    Multi-class classification f1 score
    >>> y_true = [0, 1, 2, 0, 1, 2]
    >>> y_pred = [0, 2, 1, 0, 0, 1]
    >>> f1_metric_macro = f1_metric(name='f1_metric_macro', average='macro')
    >>> f1_metric_macro.evaluate(y_true=y_true, y_pred=y_pred)
    0.26...
    >>> f1_metric_micro = f1_metric(name='f1_metric_micro', average='micro')
    >>> f1_metric_micro.evaluate(y_true=y_true, y_pred=y_pred)
    0.33...
    >>> f1_metric_weighted = f1_metric(name='f1_metric_weighted', average='weighted')
    >>> f1_metric_weighted.evaluate(y_true=y_true, y_pred=y_pred)
    0.26...
    >>> f1_metric_per_class = f1_metric(name='f1_metric_per_class', average=None)
    >>> f1_metric_per_class.evaluate(y_true=y_true, y_pred=y_pred)
    array([0.8, 0. , 0. ])

    Multi-label classification f1 score
    >>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
    >>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
    >>> f1_metric_multilabel = f1_metric(name='f1_metric_multilabel', average=None)
    >>> f1_metric_multilabel.evaluate(y_true=y_true, y_pred=y_pred)
    array([0.66666667, 1.        , 0.66666667])

    Parameters
    ----------
    y_true: list
        The list of true labels of data instances.
    y_pred: list
        The list of predicted labels of data instances.
    average: str, default = None
        The average parameter for this call. If None, the metric's own average attribute is used.
    args: list
        Other parameters
    kwargs: dict
        Other parameters

    Returns
    -------
    float | list
        The calculated f1 score of the input parameters.
    """
    average = average if average is not None else self.average
    return self.metric(y_true=y_true, y_pred=y_pred, average=average)