phoenix-dev mailing list archives

From Jody Landreneau <jodylandren...@gmail.com>
Subject client cache
Date Fri, 15 Aug 2014 18:56:36 GMT
hello phoenix devs,

Let me explain an issue I would like to solve. We have multiple Phoenix
clients running, possibly on several physical machines (different VMs),
which act as storage/retrieval endpoints. If I change the schema of a
table by adding or removing a field, I get errors in the clients that
didn't issue the ALTER. This is due to an internal client cache that is
not refreshed. Note that connections get their cache from this shared
client cache, so creating/closing connections does not help.
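
To make the failure concrete, here is a rough repro of what our clients
hit; the table/column names and the JDBC URL are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical repro. Assumes some *other* process has already run:
//   ALTER TABLE MY_TABLE ADD NEW_COL VARCHAR
public class StaleCacheRepro {
    public static void main(String[] args) throws Exception {
        // This client never saw the ALTER, so its shared metadata cache
        // still describes the old schema. Opening a fresh connection does
        // not help, because connections copy from the same client cache.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
             Statement stmt = conn.createStatement()) {
            // May fail with a stale-schema error (e.g. column not found),
            // per the behavior described above.
            stmt.execute("UPSERT INTO MY_TABLE (ID, NEW_COL) VALUES ('x', 'y')");
        }
    }
}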

I would like to address this by adding a cache with timed expiration,
also limited by size. I see that there is a Guava cache on the server
side, and I think doing something similar on the client side makes
sense. It could be much simpler than having to deal with a pruner and
other code. I was wondering if the community would accept an approach
like this. We could also reduce all the cloning of the cache,
potentially sharing a single instance across the connections that
belong to a client. I see that there is some work to manage the cache's
capacity in terms of bytes. Would it be reasonable to base capacity on
the number of tables the cache holds instead of byte-level accounting?
The cached objects should be fairly lightweight, and if the same cache
is shared across connections it should use even fewer resources.
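
Roughly, I am picturing something along these lines; this is only a
sketch, with placeholder expiry/size values, and a bare table name
standing in for whatever the metadata is really keyed on:

import java.util.concurrent.TimeUnit;

import org.apache.phoenix.schema.PTable;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Sketch only: the numbers are placeholders, and the real key would
// need to be tenant- and schema-qualified.
public class ExpiringClientCache {
    private final Cache<String, PTable> tables = CacheBuilder.newBuilder()
            // entries expire a fixed interval after being written, so a
            // schema change made by another client is eventually re-read
            // from the server instead of being served stale forever
            .expireAfterWrite(60, TimeUnit.SECONDS)
            // cap by number of tables rather than tracking byte sizes
            .maximumSize(10000)
            .build();

    public PTable get(String fullTableName) {
        return tables.getIfPresent(fullTableName);
    }

    public void put(String fullTableName, PTable table) {
        tables.put(fullTableName, table);
    }

    public void remove(String fullTableName) {
        tables.invalidate(fullTableName);
    }
}

With expireAfterWrite, a table altered by another client would be
re-read from the server within one expiry interval at worst, without
any explicit invalidation protocol.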

Are there reasons not to take this approach?

thanks in advance --
