mxnet-issues mailing list archives

From "Jay Vercellone (JIRA)" <>
Subject [jira] [Commented] (MXNET-688) Fix quantization divide by zero errors
Date Sun, 07 Oct 2018 21:44:00 GMT


Jay Vercellone commented on MXNET-688:

[~oneraynyday], the referenced PR was merged back in July. Is this issue resolved?

> Fix quantization divide by zero errors
> --------------------------------------
>                 Key: MXNET-688
>                 URL:
>             Project: Apache MXNet
>          Issue Type: Bug
>            Reporter: Ray Zhang
>            Priority: Critical
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
> The current quantization strategy for `calib_mode='entropy'` is to compute the KL divergence
between the original and quantized distributions for a range of candidate thresholds and to
choose the threshold that minimizes it. This implicitly assumes that the activation is a
continuous random variable whose density is nonzero over all reals. Because we are discretizing
the distribution, we smooth it over the range `[-threshold, threshold]`. What this does not
account for is that the sampled distribution may lie entirely outside `[-threshold, threshold]`,
in which case the candidate `p` distribution inside `_get_optimal_threshold` is all zeros.
> I have added a check that the distribution (possibly unnormalized) is proper, i.e. has at
least one nonzero entry, before attempting to smooth it; otherwise we run into a
divide-by-zero error (see the sketch after the quoted message below).
> In most cases, activation functions and layers for classification-type problems output
values roughly symmetric around 0. This is not the case for a regressor's last layer, and
there are various other examples where the activation distribution is not centered around 0;
this was a major blocker for Airbnb's adoption of MXNet's quantization capabilities.
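For context, here is a minimal sketch of the kind of guard described in the issue. The helper
name and signature are illustrative (modeled on the smoothing step the description refers to),
not necessarily the exact code in the merged PR:

```python
import numpy as np

def _smooth_distribution(p, eps=0.0001):
    """Smooth a discretized (possibly unnormalized) distribution so the
    KL divergence against it is well defined, by moving a small amount
    of mass from nonzero bins onto zero bins."""
    is_zeros = (p == 0).astype(np.float32)
    is_nonzeros = (p != 0).astype(np.float32)
    n_zeros = int(is_zeros.sum())
    n_nonzeros = p.size - n_zeros
    if n_nonzeros == 0:
        # The degenerate case from this issue: every sampled value fell
        # outside [-threshold, threshold], so the candidate `p` is all
        # zeros. Normalizing or smoothing it would divide by zero, so
        # fail loudly instead.
        raise ValueError('The discrete probability distribution is '
                         'malformed: all entries are 0.')
    # Conserve total mass: each zero bin gains eps, and each nonzero bin
    # gives up eps * n_zeros / n_nonzeros.
    eps1 = eps * float(n_zeros) / float(n_nonzeros)
    return p.astype(np.float32) + eps * is_zeros - eps1 * is_nonzeros
```

With a guard like this, `_get_optimal_threshold` can skip or report a candidate threshold whose
histogram slice is empty instead of propagating a 0/0 from normalization.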
