phoenix-dev mailing list archives

From "Alex Araujo (JIRA)" <>
Subject [jira] [Commented] (PHOENIX-4413) Possible queryserver memory leak when reusing connections and statements
Date Fri, 01 Dec 2017 17:53:00 GMT


Alex Araujo commented on PHOENIX-4413:

bq. I see no reason. Using the same StatementID and ConnectionID are analogous to a JDBC application
which would use the same Statement and Connection instance (it's a loose mapping in PQS from
the ID to the Java Object).

Good to know. [~vik.karma] tried it and it worked as expected.
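The JDBC analogy above (one Connection and one Statement instance reused for every execution, rather than a fresh pair per query) can be sketched with Python's standard-library sqlite3 module standing in for a Phoenix connection. This illustrates only the reuse pattern, not the Avatica wire protocol or PQS itself:

```python
import sqlite3

# One connection and one cursor ("statement") reused for every execution,
# mirroring the reuse of a single ConnectionID/StatementID against PQS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (TestKey TEXT PRIMARY KEY, TestValue TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(f"k{i}", f"v{i}") for i in range(100)])

# Reuse the same cursor for many point lookups instead of opening a new
# connection/statement per query.
for i in range(1000):
    cur.execute("SELECT TestValue FROM t WHERE TestKey = ?", (f"k{i % 100}",))
    row = cur.fetchone()

cur.close()
conn.close()
```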

bq. How are you (or your team) determining the difference between "require memory usage" and
"unbounded growth"?

That was unclear to me at first, but after some discussion it appears this might be related
to using default heap sizing in their test environment. We were not able to reproduce the issue
in our test environment. Closing the issue for now. Thanks for the help [~elserj].

> Possible queryserver memory leak when reusing connections and statements
> ------------------------------------------------------------------------
>                 Key: PHOENIX-4413
>                 URL:
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.12.0, 4.13.0
>            Reporter: Alex Araujo
> While testing client-side connection pooling using the C# client from Microsoft,
> we attempted to avoid creating new connections and statements for every Phoenix statement
> execution (essentially just a simple SELECT for a single key). The results were very positive
> from a performance perspective. However, after a certain number of statements executed in
> this manner, memory on the PQS appears to spike, and performance degrades significantly.
> Steps to Recreate
> Setup
> 1) Create the table: "CREATE TABLE <TableName> (TestKey varchar(255) PRIMARY KEY,
> TestValue varchar(10000))".
> 2) Populate the table with 100 random TestKey and TestValue records.
> Execution (if done with one thread, this can take up to 24 hours, so we multithreaded it)
> 1) Create connection using OpenConnectionRequestAsync.
> 2) Create statement using CreateStatementRequestAsync.
> 3) Loop n times, selecting a record with a single random key: "SELECT TestKey, TestValue
> FROM <TableName> WHERE TestKey = '<TestKey>'" issued using PrepareAndExecuteRequestAsync.
> 4) Close statement.
> 5) Close connection.
> Teardown
> 1) Drop the table.
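The execution steps above map onto Avatica protocol requests. As a rough sketch, the JSON bodies the C# client would send to PQS look something like the following (request and field names follow my recollection of Avatica's JSON protocol reference and should be treated as assumptions; in particular, the row-limit field name has changed across Avatica versions, and the real statementId comes from the createStatement response):

```python
import json
import uuid

connection_id = str(uuid.uuid4())

# 1) OpenConnectionRequest
open_conn = {"request": "openConnection", "connectionId": connection_id}

# 2) CreateStatementRequest (PQS replies with the statementId to reuse)
create_stmt = {"request": "createStatement", "connectionId": connection_id}
statement_id = 1  # placeholder; normally taken from the createStatement response

# 3) PrepareAndExecuteRequest, reissued n times with the SAME ids
def prepare_and_execute(key):
    return {
        "request": "prepareAndExecute",
        "connectionId": connection_id,
        "statementId": statement_id,
        "sql": f"SELECT TestKey, TestValue FROM T WHERE TestKey = '{key}'",
        "maxRowCount": -1,
    }

# 4) / 5) Close the statement, then the connection
close_stmt = {"request": "closeStatement",
              "connectionId": connection_id, "statementId": statement_id}
close_conn = {"request": "closeConnection", "connectionId": connection_id}

payloads = [open_conn, create_stmt,
            prepare_and_execute("k1"), prepare_and_execute("k2"),
            close_stmt, close_conn]
body = json.dumps(payloads[2])
```

The key detail for this issue is step 3: every iteration reuses the same connectionId and statementId, so on the server side PQS should keep mapping them to one Connection/Statement pair rather than accumulating new objects.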

This message was sent by Atlassian JIRA
