phoenix-user mailing list archives

From James Taylor <jamestay...@apache.org>
Subject Re: simple commands that mutate a large number of rows
Date Tue, 15 Sep 2015 14:07:29 GMT
That config setting (phoenix.mutate.maxSize) is just a safety valve to
prevent out of memory errors and may be set to whatever you like. However,
if you're going to just turn around and do a commit after running your
upsert statement, performance will improve if you turn on auto commit
instead (connection.setAutoCommit(true)), as that'll prevent double
buffering and enable execution completely on the server side when possible.

Thanks,
James

On Tuesday, September 15, 2015, James Heather <james.heather@mendeley.com>
wrote:

> I found today that I can't execute this:
>
> UPSERT INTO loadtest.testing (id, firstname, lastname)
>   SELECT NEXT VALUE FOR loadtest.testing_id_seq, firstname, lastname
>   FROM loadtest.testing
>
>
> when the table has more than 500,000 rows in it ("MutationState size of
> 512000 is bigger than max allowed size of 500000").
>
> Is there a reason for the restriction, or a way of lifting it so that
> things get processed in batches?
>
> Does this also mean that if I had a large table, and I wanted to (say)
> null out one of the columns, I wouldn't be able to do it in one hit because
> I'd be changing too many rows?
>
> James
>
>
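
For reference, the limit in the quoted error is the client-side
phoenix.mutate.maxSize setting, so it can also be raised for a single
connection. This is only a sketch: the property can equally be set in the
client's hbase-site.xml, and the value used here is an arbitrary example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RaiseMutationLimit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Raise the client-side safety valve above the default of 500000 so a
        // single commit may buffer more mutations (at the cost of client memory).
        props.setProperty("phoenix.mutate.maxSize", "1000000");
        // Placeholder JDBC URL; point it at your own cluster.
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // run the large UPSERT ... SELECT (or the batched commits) here
        }
    }
}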
