trafodion-codereview mailing list archives

From zellerh <...@git.apache.org>
Subject [GitHub] incubator-trafodion pull request #1228: [TRAFODION-2733] Provide an improved...
Date Wed, 13 Sep 2017 00:14:32 GMT
Github user zellerh commented on a diff in the pull request:

    https://github.com/apache/incubator-trafodion/pull/1228#discussion_r138498231
  
    --- Diff: core/sql/cli/Globals.cpp ---
    @@ -1099,6 +1100,52 @@ void CliGlobals::deleteContexts()
     }
     #endif  // _DEBUG
     
    +// The unused BMO memory quota can now be utilized by other
    +// BMO instances from the same or a different fragment.
    +// In an ESP process, the unused memory quota is maintained
    +// at the defaultContext; in the master process, the unused
    +// memory quota is maintained in the context of the connection.
    +
    +NABoolean CliGlobals::grabMemoryQuotaIfAvailable(ULng32 size)
    +{
    +  ContextCli *context;
    +  if (espProcess_)
    +     context = defaultContext_;
    +  else
    +     context = currContext();
    +  return context->grabMemoryQuotaIfAvailable(size);
    +}
    +
    +void CliGlobals::resetMemoryQuota() 
    +{
    +  ContextCli *context;
    +  if (espProcess_)
    +     context = defaultContext_;
    +  else
    +     context = currContext();
    +  context->resetMemoryQuota();
    +}
    +
    +ULng32 CliGlobals::unusedMemoryQuota() 
    +{ 
    +  ContextCli *context;
    +  if (espProcess_)
    +     context = defaultContext_;
    +  else
    +     context = currContext();
    +  return context->unusedMemoryQuota();
    +}
    +
    +void CliGlobals::yieldMemoryQuota(ULng32 size)
    +{
    +  ContextCli *context;
    +  if (espProcess_)
    +     context = defaultContext_;
    +  else
    +     context = currContext();
    --- End diff --
    
    Thanks, sounds ok, although I don't quite understand why we wouldn't treat the
    master and the ESPs the same. Otherwise, when we switch between serial and
    parallel plans, we may see an unexpected change in behavior. This leads me to a
    more general comment: I wonder whether it would be better to set an overall
    limit per connection rather than per node, and whether that would be easier to
    manage, both for users and for the SQL engine.
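    
    To illustrate the per-connection idea, here is a minimal sketch. All of the
    names below are hypothetical (none of this is in the patch); it only shows one
    quota pool owned by the connection, which every BMO instance of that
    connection's statements would draw from, regardless of node or fragment:
    
        // Hypothetical sketch only -- these names do not exist in Trafodion.
        // One pool per connection; all BMO operators of the connection's
        // statements grab from and yield to the same overall limit.
        class ConnectionBMOQuota
        {
        public:
          ConnectionBMOQuota(ULng32 limit) : limit_(limit), used_(0) {}
    
          NABoolean grabIfAvailable(ULng32 size)
          {
            if (used_ + size > limit_)
              return FALSE;        // connection-wide limit would be exceeded
            used_ += size;
            return TRUE;
          }
    
          void yield(ULng32 size)
          {
            used_ = (size <= used_) ? (used_ - size) : 0;
          }
    
          ULng32 unused() const { return limit_ - used_; }
    
        private:
          ULng32 limit_;           // overall limit for this connection
          ULng32 used_;            // quota currently held by all its BMOs
        };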

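    On a more minor note, the four new methods all repeat the same master/ESP
    branch. A small sketch of one way to share it, assuming a hypothetical private
    helper memoryQuotaContext() (not part of the patch):
    
        // Hypothetical helper: centralizes the choice of the context that
        // owns the shared quota (the default context in an ESP, the
        // connection's context in the master).
        ContextCli *CliGlobals::memoryQuotaContext()
        {
          return espProcess_ ? defaultContext_ : currContext();
        }
    
        NABoolean CliGlobals::grabMemoryQuotaIfAvailable(ULng32 size)
        {
          return memoryQuotaContext()->grabMemoryQuotaIfAvailable(size);
        }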

---
