lucene-dev mailing list archives

From Jan Høydahl (JIRA) <j...@apache.org>
Subject [jira] [Comment Edited] (SOLR-8207) Modernise cloud tab on Admin UI
Date Thu, 02 Aug 2018 08:15:00 GMT

    [ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565951#comment-16565951 ]

Jan Høydahl edited comment on SOLR-8207 at 8/2/18 8:14 AM:
-----------------------------------------------------------

Thanks for the feedback. I intend to commit to master asap and then get it into 7.5.

If anyone has time to look at the code in {{AdminHandlersProxy}}, especially the security aspects,
that would be great. Here's an outline of the logic; is it watertight?
 # If the {{'nodes'}} parameter is not present in a call to the systemInfo or metrics handlers,
the logic is exactly as before.
 # If the {{'nodes'}} param is present, the {{AdminHandlersProxy}} code is executed, parsing
the nodes string as a comma-separated list of nodeNames.
 # If any nodeName is malformed, we throw an exception. Likewise, if one of the node names
does not exist in live_nodes from ZK, we exit. I.e. there should be no way to inject a bogus
URL or node names that are not part of the cluster (see the validation sketch below this list).
 # The request is then fanned out by AdminHandlersProxy to all nodes in the list and returned
in a combined response for consumption by the "Nodes" tab in the Admin UI.
 # There's no upper bound on the number of nodes that can be requested at a time, but for the
"Nodes" tab it will typically be 10, only the ones rendered per page. If {{nodes=all}} is
specified, all live_nodes are consulted. Would it make sense to limit the number of nodes in
some way? There is a 10s timeout for each request, and the worst thing that could happen in a
system with a huge number of nodes is that things take too much time or time out.
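
To make the discussion concrete, here is a simplified, hypothetical sketch of the validation in
step 3. The method name, node-name regex and error messages are illustrative only, not the actual
code in the patch:
{code:java}
// Hypothetical sketch only -- method name, regex and messages are illustrative,
// not the actual AdminHandlersProxy code in the patch.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;

import org.apache.solr.common.SolrException;

public class NodesParamValidationSketch {

  /** Parse the 'nodes' parameter and fail fast on anything that is not a live node. */
  static List<String> parseNodes(String nodesParam, Set<String> liveNodes) {
    if (nodesParam == null) {
      return Collections.emptyList();       // no 'nodes' param: behave exactly as before
    }
    if ("all".equals(nodesParam)) {
      return new ArrayList<>(liveNodes);    // nodes=all: consult every live node
    }
    List<String> requested = Arrays.asList(nodesParam.split(","));
    for (String nodeName : requested) {
      // Solr node names look like host:port_context, e.g. 127.0.0.1:8983_solr
      if (!nodeName.matches("[^/:]+:\\d+_[\\w%]+")) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "Parameter 'nodes' contains a malformed node name: " + nodeName);
      }
      // Reject anything not present in live_nodes from ZK, so no bogus URLs can be injected
      if (!liveNodes.contains(nodeName)) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "Requested node " + nodeName + " is not part of the cluster");
      }
    }
    return requested;
  }
}
{code}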

I would also like feedback on the approach of running parallel sub-queries to all the nodes in
a loop using Futures. See the method {{AdminHandlersProxy#callRemoteNode}}, which constructs a
new SolrClient per sub-request:
{code:java}
HttpSolrClient solr = new HttpSolrClient.Builder(baseUrl.toString()).build();
{code}
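For discussion, here is a rough sketch of the fan-out pattern: one short-lived client per
sub-request, collected with the per-request timeout. The request construction and response
merging below are simplified assumptions for illustration, not the exact code in the patch:
{code:java}
// Rough sketch of the fan-out pattern, for discussion only -- request construction
// and response merging are simplified assumptions, not the exact patch code.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class AdminFanOutSketch {

  /** Fan the same admin request out to every node and collect the responses per nodeName. */
  static Map<String, NamedList<Object>> callAll(Map<String, String> nodeNameToBaseUrl,
                                                String path) throws Exception {
    Map<String, CompletableFuture<NamedList<Object>>> futures = new HashMap<>();
    for (Map.Entry<String, String> node : nodeNameToBaseUrl.entrySet()) {
      futures.put(node.getKey(), CompletableFuture.supplyAsync(() -> {
        // One short-lived client per sub-request, as in the snippet above
        try (HttpSolrClient solr = new HttpSolrClient.Builder(node.getValue()).build()) {
          SolrRequest<?> req =
              new GenericSolrRequest(SolrRequest.METHOD.GET, path, new ModifiableSolrParams());
          return solr.request(req);
        } catch (SolrServerException | IOException e) {
          throw new RuntimeException(e);
        }
      }));
    }
    // Collect each sub-response, honouring the 10s per-request timeout mentioned above
    Map<String, NamedList<Object>> combined = new HashMap<>();
    for (Map.Entry<String, CompletableFuture<NamedList<Object>>> entry : futures.entrySet()) {
      combined.put(entry.getKey(), entry.getValue().get(10, TimeUnit.SECONDS));
    }
    return combined;
  }
}
{code}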
There is no way to inject an arbitrary URL in there from the API. I tested with Basic Auth
enabled and it seemed to work, which suggests that the sub-requests use PKI authentication or
something similar? Does anything look shaky?


> Modernise cloud tab on Admin UI
> -------------------------------
>
>                 Key: SOLR-8207
>                 URL: https://issues.apache.org/jira/browse/SOLR-8207
>             Project: Solr
>          Issue Type: Improvement
>          Components: Admin UI
>    Affects Versions: 5.3
>            Reporter: Upayavira
>            Assignee: Jan Høydahl
>            Priority: Major
>             Fix For: master (8.0), 7.5
>
>         Attachments: SOLR-8207-refguide.patch, node-compact.png, node-details.png, node-hostcolumn.png,
node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was making real use
of SolrCloud, and when we didn't really know the use-cases we would need to support. I would
argue that, whilst they are pretty (and clever) they aren't really fit for purpose (with the
exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with many replicas
won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and their associated
replicas/collections. From this view, it would be possible to add/remove replicas and to see
the status of nodes. It would also be possible to filter nodes by status: "show me only up
nodes", "show me nodes that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission node" option,
that would ensure that no replicas on this node are leaders, and then remove all replicas
from the node, ready for it to be removed from the cluster.



