phoenix-dev mailing list archives

From "Josh Elser (JIRA)" <>
Subject [jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock
Date Tue, 07 Jun 2016 19:50:21 GMT


Josh Elser commented on PHOENIX-2940:

bq.    If the client-side cache is too small, then the client might be querying too often
for the stats, but I'm not sure what the best way to prevent this would be. Ideally, we really
want to have a propensity to cache the stats that are asked for most frequently. If we go
the timer route, and the client-side cache is too small, then it just becomes less and less
likely that the stats are in the cache when we need them (essentially disabling stats). It'd
help if we had PHOENIX-2675 so that tables that don't need stats wouldn't fill up the cache.

Agreed, limiting the contents of the cache based on size is rather difficult to work around
but important so that we don't blow out the client's heap. The normal eviction stuff will
work pretty well (evicting the least-recently used elements first), but, like you point out,
that doesn't help if the cache is woefully small to begin with. I could think up some tricky
solution which would warn the user if we were continually re-fetching the stats, but that
would still require human interaction which is "meh".
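To make the LRU point concrete, here's a minimal stdlib sketch (not Phoenix's actual Guava-based cache; the size cap and table names are made up): with access-ordering, the entry that has gone longest without being touched is the one evicted once the cap is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class LruSketch {
    // Returns the surviving keys after one eviction; cap of 2 is hypothetical.
    static Set<String> demo() {
        final int maxEntries = 2;
        // accessOrder=true makes iteration order = least-recently-accessed first
        Map<String, String> cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries;
            }
        };
        cache.put("SYSTEM.CATALOG", "stats-a");
        cache.put("MY_TABLE", "stats-b");
        cache.get("SYSTEM.CATALOG");         // touch: MY_TABLE is now least-recently used
        cache.put("OTHER_TABLE", "stats-c"); // exceeds the cap: MY_TABLE gets evicted
        return cache.keySet();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [SYSTEM.CATALOG, OTHER_TABLE]
    }
}
```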

Actually, the use of {{expireAfterWrite}} is a little nonsensical, I think.

        final long halfStatsUpdateFreq = config.getLong(
                QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB,
                QueryServicesOptions.DEFAULT_STATS_UPDATE_FREQ_MS) / 2;
        tableStatsCache = CacheBuilder.newBuilder()
                .expireAfterWrite(halfStatsUpdateFreq, TimeUnit.MILLISECONDS)

So, we have some frequency at which we update stats for a table (default 15mins). After 7m30s
from caching the stats for a table, we'll evict it, regardless of how often (or not) it was
accessed. This seems goofball -- {{expireAfterAccess(long)}} seems to make much more sense
to me (will remove it if it isn't accessed at all after 7m30s). This also matches what is
currently done in {{GlobalCache}}.
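The difference between the two policies can be sketched with plain timestamps (this is illustrative only; Phoenix uses Guava's {{CacheBuilder}}, and the method names below are not Phoenix or Guava APIs):

```java
public class ExpirySketch {
    long writtenAt;      // set once, when the entry is cached
    long lastAccessedAt; // refreshed on every read

    ExpirySketch(long now) { writtenAt = now; lastAccessedAt = now; }

    void access(long now) { lastAccessedAt = now; }

    // expireAfterWrite semantics: evicted ttl ms after caching,
    // no matter how frequently the entry is read
    boolean expiredAfterWrite(long now, long ttl) { return now - writtenAt >= ttl; }

    // expireAfterAccess semantics: evicted only after ttl ms with no reads at all
    boolean expiredAfterAccess(long now, long ttl) { return now - lastAccessedAt >= ttl; }

    public static void main(String[] args) {
        long ttl = 450_000;                   // 7m30s, half the default 15m frequency
        ExpirySketch e = new ExpirySketch(0); // cached at t=0
        e.access(400_000);                    // read at 6m40s
        // At t=8m, expireAfterWrite evicts the still-hot entry...
        System.out.println(e.expiredAfterWrite(480_000, ttl));  // true
        // ...while expireAfterAccess keeps it, since it was just used.
        System.out.println(e.expiredAfterAccess(480_000, ttl)); // false
    }
}
```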

Trying to understand this even more, in {{MetaDataClient#updateStatisticsInternal(...)}},
we also use half of {{QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB}} (when {{QueryServices.MIN_STATS_UPDATE_FREQ_MS_ATTRIB}}
is unset) to determine if we should ask the server to refresh the stats. Why are we doing
this at half of the update frequency?
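As I read it, the client-side check amounts to something like the following (a sketch of my understanding, not the actual {{MetaDataClient}} code; the method and parameter names are made up): when no explicit minimum is configured, fall back to half of {{STATS_UPDATE_FREQ_MS_ATTRIB}} and only ask the server to refresh stats when the cached copy is at least that old.

```java
public class StatsRefreshSketch {
    // When minStatsUpdateFreqMs is unset (null), default to half the update frequency.
    static long minUpdateFreq(Long minStatsUpdateFreqMs, long statsUpdateFreqMs) {
        return minStatsUpdateFreqMs != null ? minStatsUpdateFreqMs : statsUpdateFreqMs / 2;
    }

    // Ask the server to refresh stats only if the cached copy is old enough.
    static boolean shouldRefresh(long nowMs, long lastStatsUpdateMs,
                                 Long minStatsUpdateFreqMs, long statsUpdateFreqMs) {
        return nowMs - lastStatsUpdateMs
                >= minUpdateFreq(minStatsUpdateFreqMs, statsUpdateFreqMs);
    }

    public static void main(String[] args) {
        long fifteenMin = 900_000; // default stats update frequency
        // No explicit minimum configured, last update at t=0:
        System.out.println(shouldRefresh(480_000, 0, null, fifteenMin)); // true  (8m   >= 7m30s)
        System.out.println(shouldRefresh(400_000, 0, null, fifteenMin)); // false (6m40s < 7m30s)
    }
}
```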

> Remove STATS RPCs from rowlock
> ------------------------------
>                 Key: PHOENIX-2940
>                 URL:
>             Project: Phoenix
>          Issue Type: Improvement
>         Environment: HDP 2.3 + Apache Phoenix 4.6.0
>            Reporter: Nick Dimiduk
>            Assignee: Josh Elser
>             Fix For: 4.9.0
>         Attachments: PHOENIX-2940.001.patch
> We have an unfortunate situation wherein we potentially execute many RPCs while holding
a row lock. This problem is discussed in detail on the user list thread ["Write path blocked
by MetaDataEndpoint acquiring region lock"|].
During some situations, the [MetaDataEndpoint|]
coprocessor will attempt to refresh its view of the schema definitions and statistics. This
involves [taking a rowlock|],
executing a scan against the [local region|],
and then a scan against a [potentially remote|]
statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps (in my case,
the use of the ROW_TIMESTAMP feature, or perhaps as in PHOENIX-2607). When combined with other
issues (PHOENIX-2939), we end up with total gridlock in our handler threads -- everyone queued
behind the rowlock, scanning and rescanning SYSTEM.STATS. Because this happens in the MetaDataEndpoint,
the means by which all clients refresh their knowledge of schema, gridlock in that RS can
effectively stop all forward progress on the cluster.

This message was sent by Atlassian JIRA
