aurora-commits mailing list archives

Subject svn commit: r1840515 [15/15] - in /aurora/site: publish/blog/aurora-0-21-0-released/ publish/documentation/0.21.0/ publish/documentation/0.21.0/additional-resources/ publish/documentation/0.21.0/additional-resources/presentations/ publish/documentation...
Date Tue, 11 Sep 2018 05:28:12 GMT
Added: aurora/site/source/documentation/0.21.0/reference/
--- aurora/site/source/documentation/0.21.0/reference/ (added)
+++ aurora/site/source/documentation/0.21.0/reference/ Tue Sep 11
05:28:10 2018
@@ -0,0 +1,272 @@
+# Scheduler Configuration Reference
+The Aurora scheduler can take a variety of configuration options through command-line arguments.
+A list of the available options can be seen by running `aurora-scheduler -help`.
+Please refer to the [Operator Configuration Guide](../../operations/configuration/) for details
+on how to properly set the most important options.
+$ aurora-scheduler -help
+-h or -help to print this help message
+Required flags:
+-backup_dir [not null]
+	Directory to store backups under. Will be created if it does not exist.
+-cluster_name [not null]
+	Name to identify the cluster being served.
+	Properties file which contains framework credentials to authenticate with the Mesos master.
Must contain the properties 'aurora_authentication_principal' and 'aurora_authentication_secret'.
+	The IP address to listen on. If not set, the scheduler will listen on all interfaces.
+-mesos_master_address [not null]
+	Address for the mesos master, can be a socket address or zookeeper path.
+	The Mesos role this framework will register as. The default is to leave this empty; the
framework will register without any role and only receive unreserved resources in offers.
+-serverset_path [not null, must be non-empty]
+	ZooKeeper ServerSet path to register at.
+	Fully qualified class name of the servlet filter to be applied after the shiro auth filters
are applied.
+	Path to the thermos executor entry point.
+-tier_config [file must be readable]
+	Configuration file defining supported task tiers, task traits and behaviors.
+-webhook_config [file must exist, file must be readable]
+	Path to webhook configuration file.
+-zk_endpoints [must have at least 1 item]
+	Endpoint specification for the ZooKeeper servers.
+Optional flags:
+-allow_container_volumes (default false)
+	Allow passing in volumes in the job. Enabling this could pose a privilege escalation threat.
+-allow_docker_parameters (default false)
+	Allow passing Docker container parameters in the job.
+-allow_gpu_resource (default false)
+	Allow jobs to request Mesos GPU resource.
+-allowed_container_types (default [MESOS])
+	Container types that are allowed to be used by jobs.
+-allowed_job_environments (default ^(prod|devel|test|staging\d*)$)
+	Regular expression describing the environments that are allowed to be used by jobs.
+-async_slot_stat_update_interval (default (1, mins))
+	Interval on which to try to update open slot stats.
+-async_task_stat_update_interval (default (1, hrs))
+	Interval on which to try to update resource consumption stats.
+-async_worker_threads (default 8)
+	The number of worker threads to process async task operations with.
+-backup_interval (default (1, hrs))
+	Minimum interval on which to write a storage backup.
+-cron_scheduler_num_threads (default 10)
+	Number of threads to use for the cron scheduler thread pool.
+-cron_scheduling_max_batch_size (default 10) [must be > 0]
+	The maximum number of triggered cron jobs that can be processed in a batch.
+-cron_start_initial_backoff (default (5, secs))
+	Initial backoff delay while waiting for a previous cron run to be killed.
+-cron_start_max_backoff (default (1, mins))
+	Max backoff delay while waiting for a previous cron run to be killed.
+-cron_timezone (default GMT)
+	TimeZone to use for cron predictions.
+-custom_executor_config [file must exist, file must be readable]
+	Path to custom executor settings configuration file.
+-default_docker_parameters (default {})
+	Default docker parameters for any job that does not explicitly declare parameters.
+-dlog_max_entry_size (default (512, KB))
+	Specifies the maximum entry size to append to the log. Larger entries will be split across
entry Frames.
+-dlog_shutdown_grace_period (default (2, secs))
+	Specifies the maximum time to wait for scheduled checkpoint and snapshot actions to complete
before forcibly shutting down.
+-dlog_snapshot_interval (default (1, hrs))
+	Specifies the frequency at which snapshots of local storage are taken and written to the
+	List of domains for which CORS support should be enabled.
+-enable_mesos_fetcher (default false)
+	Allow jobs to pass URIs to the Mesos Fetcher. Note that enabling this feature could pose
a privilege escalation threat.
+-enable_preemptor (default true)
+	Enable the preemptor and preemption
+-enable_revocable_cpus (default true)
+	Treat CPUs as a revocable resource.
+-enable_revocable_ram (default false)
+	Treat RAM as a revocable resource.
+-executor_user (default root)
+	User to start the executor. Defaults to "root". Set this to an unprivileged user if the
mesos master was started with "--no-root_submissions". If set to anything other than "root",
the executor will ignore the "role" setting for jobs since it can't use setuid() anymore.
This means that all your jobs will run under the specified user and the user has to exist
on the Mesos agents.
+-first_schedule_delay (default (1, ms))
+	Initial amount of time to wait before first attempting to schedule a PENDING task.
+-flapping_task_threshold (default (5, mins))
+	A task that repeatedly runs for less than this time is considered to be flapping.
+-framework_announce_principal (default false)
+	When 'framework_authentication_file' flag is set, the FrameworkInfo registered with the
mesos master will also contain the principal. This is necessary if you intend to use mesos
authorization via mesos ACLs. The default will change in a future release. Changing this value
is backwards incompatible. For details, see MESOS-703.
+-framework_failover_timeout (default (21, days))
+	Time after which a framework is considered deleted.  SHOULD BE VERY HIGH.
+-framework_name (default Aurora)
+	Name used to register the Aurora framework with Mesos.
+-global_container_mounts (default [])
+	A comma separated list of mount points (in host:container form) to mount into all (non-mesos)
containers.
+-history_max_per_job_threshold (default 100)
+	Maximum number of terminated tasks to retain in a job history.
+-history_min_retention_threshold (default (1, hrs))
+	Minimum guaranteed time for task history retention before any pruning is attempted.
+-history_prune_threshold (default (2, days))
+	Time after which the scheduler will prune terminated task history.
+-host_maintenance_polling_interval (default (1, minute))
+	Interval between polling for pending host maintenance requests.
+	The hostname to advertise in ZooKeeper instead of the locally-resolved hostname.
+-http_authentication_mechanism (default NONE)
+	HTTP Authentication mechanism to use.
+-http_port (default 0)
+	The port to start an HTTP server on.  Default value will choose a random port.
+-initial_flapping_task_delay (default (30, secs))
+	Initial amount of time to wait before attempting to schedule a flapping task.
+-initial_schedule_penalty (default (1, secs))
+	Initial amount of time to wait before attempting to schedule a task that has failed to schedule.
+-initial_task_kill_retry_interval (default (5, secs))
+	When killing a task, retry after this delay if mesos has not responded, backing off up to
+-job_update_history_per_job_threshold (default 10)
+	Maximum number of completed job updates to retain in a job update history.
+-job_update_history_pruning_interval (default (15, mins))
+	Job update history pruning interval.
+-job_update_history_pruning_threshold (default (30, days))
+	Time after which the scheduler will prune completed job update history.
+-kerberos_debug (default false)
+	Produce additional Kerberos debugging output.
+	Path to the server keytab.
+	Kerberos server principal to use, usually of the form HTTP/
+-max_flapping_task_delay (default (5, mins))
+	Maximum delay between attempts to schedule a flapping task.
+-max_leading_duration (default (1, days))
+	After leading for this duration, the scheduler should commit suicide.
+-max_parallel_coordinated_maintenance (default 10)
+	Maximum number of coordinators that can be contacted in parallel.
+-max_registration_delay (default (1, mins))
+	Max allowable delay to allow the driver to register before aborting.
+-max_reschedule_task_delay_on_startup (default (30, secs))
+	Upper bound of random delay for pending task rescheduling on scheduler startup.
+-max_saved_backups (default 48)
+	Maximum number of backups to retain before deleting the oldest backups.
+-max_schedule_attempts_per_sec (default 40.0)
+	Maximum number of scheduling attempts to make per second.
+-max_schedule_penalty (default (1, mins))
+	Maximum delay between attempts to schedule a PENDING task.
+-max_sla_duration_secs (default (2, hrs))
+	Maximum duration window for which SLA requirements are to be satisfied. This does not apply
to jobs that have a CoordinatorSlaPolicy.
+-max_status_update_batch_size (default 1000) [must be > 0]
+	The maximum number of status updates that can be processed in a batch.
+-max_task_event_batch_size (default 300) [must be > 0]
+	The maximum number of task state change events that can be processed in a batch.
+-max_tasks_per_job (default 4000) [must be > 0]
+	Maximum number of allowed tasks in a single job.
+-max_tasks_per_schedule_attempt (default 5) [must be > 0]
+	The maximum number of tasks to pick in a single scheduling attempt.
+-max_update_instance_failures (default 20000) [must be > 0]
+	Upper limit on the number of failures allowed during a job update. This helps cap potentially
unbounded entries into storage.
+-min_offer_hold_time (default (5, mins))
+	Minimum amount of time to hold a resource offer before declining.
+-min_required_instances_for_sla_check (default 20)
+	Minimum number of instances required for a job to be eligible for SLA check. This does not
apply to jobs that have a CoordinatorSlaPolicy.
+-native_log_election_retries (default 20)
+	The maximum number of attempts to obtain a new log writer.
+-native_log_election_timeout (default (15, secs))
+	The timeout for a single attempt to obtain a new log writer.
+	Path to a file to store the native log data in. If the parent directory does not exist it
will be created.
+-native_log_quorum_size (default 1)
+	The size of the quorum required for all log mutations.
+-native_log_read_timeout (default (5, secs))
+	The timeout for doing log reads.
+-native_log_write_timeout (default (3, secs))
+	The timeout for doing log appends and truncations.
+	A zookeeper node for use by the native log to track the master coordinator.
+-offer_filter_duration (default (5, secs))
+	Duration after which we expect Mesos to re-offer unused resources. A short duration improves
scheduling performance in smaller clusters, but might lead to resource starvation for other
frameworks if you run many frameworks in your cluster.
+-offer_hold_jitter_window (default (1, mins))
+	Maximum amount of random jitter to add to the offer hold time window.
+-offer_reservation_duration (default (3, mins))
+	Time to reserve an agent's offers while trying to satisfy a task preempting another.
+-offer_set_module (default [class org.apache.aurora.scheduler.offers.OfferSetModule])
+  Guice module for replacing offer holding and scheduling logic.
+-partition_aware (default false)
+  Whether or not to integrate with the partition-aware Mesos capabilities.
+-populate_discovery_info (default false)
+	If true, Aurora populates DiscoveryInfo field of Mesos TaskInfo.
+-preemption_delay (default (3, mins))
+	Time interval after which a pending task becomes eligible to preempt other tasks
+-preemption_slot_finder_modules (default [class org.apache.aurora.scheduler.preemptor.PendingTaskProcessorModule,
class org.apache.aurora.scheduler.preemptor.PreemptionVictimFilterModule])
+  Guice modules for replacing preemption logic.
+-preemption_slot_hold_time (default (5, mins))
+	Time to hold a preemption slot found before it is discarded.
+-preemption_slot_search_interval (default (1, mins))
+	Time interval between pending task preemption slot searches.
+-receive_revocable_resources (default false)
+	Allows receiving revocable resource offers from Mesos.
+-reconciliation_explicit_batch_interval (default (5, secs))
+	Interval between explicit batch reconciliation requests.
+-reconciliation_explicit_batch_size (default 1000) [must be > 0]
+	Number of tasks in a single batch request sent to Mesos for explicit reconciliation.
+-reconciliation_explicit_interval (default (60, mins))
+	Interval on which scheduler will ask Mesos for status updates of all non-terminal tasks
known to scheduler.
+-reconciliation_implicit_interval (default (60, mins))
+	Interval on which scheduler will ask Mesos for status updates of all non-terminal tasks
known to Mesos.
+-reconciliation_initial_delay (default (1, mins))
+	Initial amount of time to delay task reconciliation after scheduler start up.
+-reconciliation_schedule_spread (default (30, mins))
+	Difference between explicit and implicit reconciliation intervals intended to create a non-overlapping
task reconciliation schedule.
+-require_docker_use_executor (default true)
+	If false, Docker tasks may run without an executor (EXPERIMENTAL)
+-scheduling_max_batch_size (default 3) [must be > 0]
+	The maximum number of scheduling attempts that can be processed in a batch.
+-serverset_endpoint_name (default http)
+	Name of the scheduler endpoint published in ZooKeeper.
+	Path to shiro.ini for authentication and authorization configuration.
+-shiro_realm_modules (default [class])
+	Guice modules for configuring Shiro Realms.
+-sla_aware_action_max_batch_size (default 300) [must be > 0]
+	The maximum number of sla aware update actions that can be processed in a batch.
+-sla_aware_kill_retry_min_delay (default (1, min)) [must be > 0]
+	The minimum amount of time to wait before retrying an SLA-aware kill (using a truncated
binary backoff).
+-sla_aware_kill_retry_max_delay (default (5, min)) [must be > 0]
+	The maximum amount of time to wait before retrying an SLA-aware kill (using a truncated
binary backoff).
+-sla_coordinator_timeout (default (1, min)) [must be > 0]
+	Timeout interval for communicating with Coordinator.
+-sla_non_prod_metrics (default [])
+	Metric categories collected for non production tasks.
+-sla_prod_metrics (default [JOB_UPTIMES, PLATFORM_UPTIME, MEDIANS])
+	Metric categories collected for production tasks.
+-sla_stat_refresh_interval (default (1, mins))
+	The SLA stat refresh interval.
+-stat_retention_period (default (1, hrs))
+	Time for a stat to be retained in memory before expiring.
+-stat_sampling_interval (default (1, secs))
+	Statistic value sampling interval.
+-task_assigner_modules (default [class org.apache.aurora.scheduler.scheduling.TaskAssignerImplModule])
+  Guice modules for replacing task assignment logic.
+-thermos_executor_cpu (default 0.25)
+	The number of CPU cores to allocate for each instance of the executor.
+	Extra arguments to be passed to the thermos executor
+-thermos_executor_ram (default (128, MB))
+	The amount of RAM to allocate for each instance of the executor.
+-thermos_executor_resources (default [])
+	A comma separated list of additional resources to copy into the sandbox. Note: if thermos_executor_path
is not the thermos_executor.pex file itself, this must include it.
+-thermos_home_in_sandbox (default false)
+	If true, changes HOME to the sandbox before running the executor. This primarily has the
effect of causing the executor and runner to extract themselves into the sandbox.
+-thrift_method_interceptor_modules (default [])
+	Additional Guice modules for intercepting Thrift method calls.
+-transient_task_state_timeout (default (5, mins))
+	The amount of time after which to treat a task stuck in a transient state as LOST.
+-viz_job_url_prefix (default )
+	URL prefix for job container stats.
+	chroot path to use for the ZooKeeper connections
+	user:password to use when authenticating with ZooKeeper.
+-zk_in_proc (default false)
+	Launches an embedded zookeeper server for local testing causing -zk_endpoints to be ignored
if specified.
+-zk_session_timeout (default (4, secs))
+	The ZooKeeper session timeout.
+-zk_use_curator (default true)
+	DEPRECATED: Uses Apache Curator as the zookeeper client; otherwise a copy of Twitter commons/zookeeper
(the legacy library) is used.

Added: aurora/site/source/documentation/0.21.0/reference/
--- aurora/site/source/documentation/0.21.0/reference/ (added)
+++ aurora/site/source/documentation/0.21.0/reference/ Tue Sep 11 05:28:10
@@ -0,0 +1,19 @@
+# HTTP endpoints
+There are a number of HTTP endpoints that the Aurora scheduler exposes. These allow various
+operational tasks to be performed on the scheduler. Below is an (incomplete) list of such
+endpoints and a brief explanation of what they do.
+## Leader health
+The /leaderhealth endpoint enables performing health checks on the scheduler instances in order
+to forward requests to the leading scheduler. This is typically used by a load balancer such as
+HAProxy or AWS ELB.
+When an HTTP GET request is issued on this endpoint, it responds as follows:
+- If the instance that received the GET request is the leading scheduler, an HTTP status code
+  `200 OK` is returned.
+- If the instance that received the GET request is not the leading scheduler but a leader
+  exists, an HTTP status code of `503 SERVICE_UNAVAILABLE` is returned.
+- If no leader currently exists or the leader is unknown, an HTTP status code of `502 BAD_GATEWAY`
+  is returned.
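A load balancer or probe script can act on these status codes directly. The following is a minimal sketch; the `probe`/`classify` helpers and URL handling are illustrative assumptions, not part of Aurora:

```python
from urllib.error import HTTPError
from urllib.request import urlopen

# Status codes returned by /leaderhealth, per the contract above.
LEADER, FOLLOWER, NO_LEADER = 200, 503, 502

def classify(status):
    """Map a /leaderhealth HTTP status code to the scheduler's role."""
    return {
        LEADER: 'leading scheduler',
        FOLLOWER: 'standby scheduler (leader exists elsewhere)',
        NO_LEADER: 'no known leader',
    }.get(status, 'unexpected status')

def probe(scheduler_url):
    """Probe one scheduler instance; hypothetical helper, not an Aurora API."""
    try:
        with urlopen(scheduler_url + '/leaderhealth') as resp:
            return classify(resp.status)
    except HTTPError as e:  # 502/503 responses surface as HTTPError
        return classify(e.code)
```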

Added: aurora/site/source/documentation/0.21.0/reference/
--- aurora/site/source/documentation/0.21.0/reference/ (added)
+++ aurora/site/source/documentation/0.21.0/reference/ Tue Sep 11 05:28:10
@@ -0,0 +1,175 @@
+# Task Lifecycle
+When Aurora reads a configuration file and finds a `Job` definition, it:
+1.  Evaluates the `Job` definition.
+2.  Splits the `Job` into its constituent `Task`s.
+3.  Sends those `Task`s to the scheduler.
+4.  The scheduler puts the `Task`s into `PENDING` state, starting each
+    `Task`'s life cycle.
+![Life of a task](../images/lifeofatask.png)
+Please note, a couple of task states described below are missing from
+this state diagram.
+## PENDING to RUNNING states
+When a `Task` is in the `PENDING` state, the scheduler constantly
+searches for machines satisfying that `Task`'s resource request
+requirements (RAM, disk space, CPU time) while maintaining configuration
+constraints such as "a `Task` must run on machines dedicated to a
+particular role" or attribute limit constraints such as "at most 2
+`Task`s from the same `Job` may run on each rack". When the scheduler
+finds a suitable match, it assigns the `Task` to a machine and puts the
+`Task` into the `ASSIGNED` state.
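The matching step can be pictured as a filter over candidate machines. This toy model (the dict shapes and field names are invented for illustration) captures the resource-fit and constraint checks:

```python
def find_match(task, machines):
    """Return the host of the first machine satisfying the task's resource
    request and all of its constraints, or None (the task stays PENDING)."""
    for m in machines:
        fits = (m['cpus'] >= task['cpus'] and
                m['ram_mb'] >= task['ram_mb'] and
                m['disk_mb'] >= task['disk_mb'])
        if fits and all(check(m) for check in task.get('constraints', [])):
            return m['host']  # Task moves PENDING -> ASSIGNED
    return None
```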
+From the `ASSIGNED` state, the scheduler sends an RPC to the agent
+machine containing `Task` configuration, which the agent uses to spawn
+an executor responsible for the `Task`'s lifecycle. When the scheduler
+receives an acknowledgment that the machine has accepted the `Task`,
+the `Task` goes into `STARTING` state.
+`STARTING` state initializes a `Task` sandbox. When the sandbox is fully
+initialized, Thermos begins to invoke `Process`es. The agent machine then
+sends an update to the scheduler that the `Task` is in `RUNNING` state, but
+only after the task satisfies its liveness requirements.
+See [Health Checking](../features/services#health-checking) for more details
+on how to configure health checks.
+## RUNNING to terminal states
+There are various ways that an active `Task` can transition into a terminal
+state. By definition, it can never leave that state. However, depending on the
+nature of the termination and the originating `Job` definition
+(e.g. `service`, `max_task_failures`), a replacement `Task` might be
+scheduled in its place.
+### Natural Termination: FINISHED, FAILED
+A `RUNNING` `Task` can terminate without direct user interaction. For
+example, it may be a finite computation that finishes, even something as
+simple as `echo hello world.`, or it could be an exceptional condition in
+a long-lived service. If the `Task` is successful (its underlying
+processes have succeeded with exit status `0` or finished without
+reaching failure limits) it moves into `FINISHED` state. If it finished
+after reaching a set of failure limits, it goes into `FAILED` state.
+A terminated `Task` which is subject to rescheduling will be temporarily
+`THROTTLED` if it is considered to be flapping. A task is flapping if its
+previous invocation was terminated after less than 5 minutes (scheduler
+default). The time penalty a task has to remain in the `THROTTLED` state,
+before it is eligible for rescheduling, increases with each consecutive
+failure.
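The growing penalty can be sketched as an exponential backoff between the scheduler's `-initial_flapping_task_delay` (30 seconds) and `-max_flapping_task_delay` (5 minutes) defaults listed in the configuration reference; the doubling rule here is an illustrative assumption, not the scheduler's exact formula:

```python
INITIAL_DELAY_SECS = 30   # -initial_flapping_task_delay default
MAX_DELAY_SECS = 300      # -max_flapping_task_delay default

def throttle_delay_secs(consecutive_flaps):
    """Penalty a flapping task spends THROTTLED before rescheduling.

    Assumes a simple doubling per consecutive flap, capped at the maximum.
    """
    return min(INITIAL_DELAY_SECS * 2 ** consecutive_flaps, MAX_DELAY_SECS)
```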
+### Forceful Termination: KILLING, RESTARTING
+You can terminate a `Task` by issuing an `aurora job kill` command, which
+moves it into `KILLING` state. The scheduler then sends the agent a
+request to terminate the `Task`. If the scheduler receives a successful
+response, it moves the `Task` into `KILLED` state and never restarts it.
+If a `Task` is forced into the `RESTARTING` state via the `aurora job restart`
+command, the scheduler kills the underlying task but in parallel schedules
+an identical replacement for it.
+In any case, the responsible executor on the agent follows an escalation
+sequence when killing a running task:
+  1. If a `HttpLifecycleConfig` is not present, skip to (4).
+  2. Send a POST to the `graceful_shutdown_endpoint` and wait
+  `graceful_shutdown_wait_secs` seconds.
+  3. Send a POST to the `shutdown_endpoint` and wait
+  `shutdown_wait_secs` seconds.
+  4. Send SIGTERM (`kill`) and wait at most `finalization_wait` seconds.
+  5. Send SIGKILL (`kill -9`).
+If the executor notices that all `Process`es in a `Task` have aborted
+during this sequence, it will not proceed with subsequent steps.
+Note that graceful shutdown is best-effort, and due to the many
+inevitable realities of distributed systems, it may not be performed.
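The escalation sequence above can be modeled as a small function that produces the ordered steps; the dict-based `http_lifecycle` argument is an illustrative stand-in for `HttpLifecycleConfig`, not an Aurora API:

```python
def kill_steps(http_lifecycle=None, finalization_wait_secs=10):
    """Return the ordered escalation steps for killing a running task.

    `http_lifecycle`: dict standing in for HttpLifecycleConfig, or None.
    Each step is (action, endpoint, wait_secs); an illustrative model only.
    """
    steps = []
    if http_lifecycle is not None:  # steps (2) and (3) apply only with a config
        steps.append(('POST', http_lifecycle['graceful_shutdown_endpoint'],
                      http_lifecycle['graceful_shutdown_wait_secs']))
        steps.append(('POST', http_lifecycle['shutdown_endpoint'],
                      http_lifecycle['shutdown_wait_secs']))
    steps.append(('SIGTERM', None, finalization_wait_secs))  # step (4)
    steps.append(('SIGKILL', None, 0))                       # step (5)
    return steps
```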
+### Unexpected Termination: LOST
+If a `Task` stays in a transient task state for too long (such as `ASSIGNED`
+or `STARTING`), the scheduler forces it into `LOST` state, creating a new
+`Task` in its place that's sent into `PENDING` state.
+In addition, if the Mesos core tells the scheduler that an agent has
+become unhealthy (or outright disappeared), the `Task`s assigned to that
+agent go into `LOST` state and new `Task`s are created in their place.
+From `PENDING` state, there is no guarantee a `Task` will be reassigned
+to the same machine unless job constraints explicitly force it there.
+If Aurora is configured to enable partition awareness, a task which is in a
+running state can transition to `PARTITIONED`. This happens when the state
+of the task in Mesos becomes unknown. By default Aurora errs on the side of
+availability, so all tasks that transition to `PARTITIONED` are immediately
+transitioned to `LOST`.
+This policy is not ideal for all types of workloads you may wish to run in
+your Aurora cluster, e.g. for jobs where task failures are very expensive.
+So job owners may set their own `PartitionPolicy` where they can control
+how long to remain in `PARTITIONED` before transitioning to `LOST`. Or they
+can disable any automatic attempts to `reschedule` when in `PARTITIONED`,
+effectively waiting out the partition for as long as possible.
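As a sketch, a job that prefers to wait out partitions might configure something like the following (field names follow the `PartitionPolicy` configuration reference; the values are illustrative):

```
Job(
  ...
  partition_policy = PartitionPolicy(
    reschedule = True,   # still reschedule, but only after the delay below
    delay_secs = 600     # remain PARTITIONED for 10 minutes before LOST
  )
)
```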
+## PARTITIONED and transient states
+The `PartitionPolicy` provided by users only applies to tasks which are
+currently running. When tasks are moving in and out of transient states,
+e.g. tasks being updated, restarted, preempted, etc., `PARTITIONED` tasks
+are moved immediately to `LOST`. This prevents situations where system
+or user-initiated actions are blocked indefinitely waiting for partitions
+to resolve (that may never be resolved).
+### Giving Priority to Production Tasks: PREEMPTING
+Sometimes a Task needs to be interrupted, such as when a non-production
+Task's resources are needed by a higher priority production Task. This
+type of interruption is called a *pre-emption*. When this happens in
+Aurora, the non-production Task is killed and moved into
+the `PREEMPTING` state when both the following are true:
+- The task being killed is a non-production task.
+- The other task is a `PENDING` production task that hasn't been
+  scheduled due to a lack of resources.
+The scheduler UI shows the non-production task was preempted in favor of
+the production task. At some point, tasks in `PREEMPTING` move to `KILLED`.
+Note that non-production tasks consuming many resources are likely to be
+preempted in favor of production tasks.
+### Making Room for Maintenance: DRAINING
+Cluster operators can set an agent into maintenance mode. This will transition
+all `Task`s running on this agent into `DRAINING` and eventually to `KILLED`.
+Drained `Task`s will be restarted on other agents for which no maintenance
+has been announced yet.
+## State Reconciliation
+Due to the many inevitable realities of distributed systems, there might
+be a mismatch of perceived and actual cluster state (e.g. a machine returns
+from a `netsplit` but the scheduler has already marked all its `Task`s as
+`LOST` and rescheduled them).
+Aurora regularly runs a state reconciliation process in order to detect
+and correct such issues (e.g. by killing the errant `RUNNING` tasks).
+By default, the proper detection of all failure scenarios and inconsistencies
+may take up to an hour.
+To emphasize this point: there is no uniqueness guarantee for a single
+instance of a job in the presence of network partitions. If the `Task`
+requires that, it should be baked in at the application level using a
+distributed coordination service such as ZooKeeper.
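The reconciliation cadence described above follows from the scheduler defaults listed in the configuration reference (`-reconciliation_explicit_interval` and `-reconciliation_implicit_interval` of 60 minutes, `-reconciliation_schedule_spread` of 30 minutes): the spread keeps the two schedules interleaved rather than overlapping. A sketch, ignoring the initial delay:

```python
EXPLICIT_INTERVAL_MINS = 60   # -reconciliation_explicit_interval default
IMPLICIT_INTERVAL_MINS = 60   # -reconciliation_implicit_interval default
SPREAD_MINS = 30              # -reconciliation_schedule_spread default

def reconciliation_times(rounds):
    """First few explicit/implicit reconciliation times, in minutes after
    start, showing the non-overlapping interleaving produced by the spread."""
    explicit = [i * EXPLICIT_INTERVAL_MINS for i in range(rounds)]
    implicit = [SPREAD_MINS + i * IMPLICIT_INTERVAL_MINS for i in range(rounds)]
    return explicit, implicit
```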

Added: aurora/site/source/documentation/latest/features/
--- aurora/site/source/documentation/latest/features/ (added)
+++ aurora/site/source/documentation/latest/features/ Tue Sep 11 05:28:10
@@ -0,0 +1,185 @@
+# SLA Requirements
+- [Overview](#overview)
+- [Default SLA](#default-sla)
+- [Custom SLA](#custom-sla)
+  - [Count-based](#count-based)
+  - [Percentage-based](#percentage-based)
+  - [Coordinator-based](#coordinator-based)
+## Overview
+Aurora guarantees SLA requirements for jobs. These requirements limit the impact of cluster-wide
+maintenance operations on the jobs. For instance, when an operator upgrades
+the OS on all the Mesos agent machines, the tasks scheduled on them need to be drained.
+By specifying the SLA requirements a job can make sure that it has enough instances to
+continue operating safely without incurring downtime.
+> SLA is defined as the minimum number of active tasks required for a job over every duration window.
+> A task is active if it was in `RUNNING` state during the last duration window.
+There is a [default](#default-sla) SLA guarantee for
+[preferred](../../features/multitenancy/#configuration-tiers) tier jobs and it is also possible to
+specify [custom](#custom-sla) SLA requirements.
+## Default SLA
+Aurora guarantees a default SLA requirement for tasks in
+[preferred](../../features/multitenancy/#configuration-tiers) tier.
+> 95% of tasks in a job will be `active` for every 30 mins.
+## Custom SLA
+For jobs that require different SLA requirements, Aurora allows jobs to specify their own
+SLA requirements via the `SlaPolicies`. There are 3 different ways to express SLA requirements.
+### [Count-based](../../reference/configuration/#countslapolicy-objects)
+For jobs that need a minimum `number` of instances to be running all the time,
+[`CountSlaPolicy`](../../reference/configuration/#countslapolicy-objects) provides the ability
+to express the minimum number of required active instances (i.e. the number of
+tasks that are `RUNNING` for at least `duration_secs`). For instance, if we have a
+`replicated-service` that has 3 instances and needs at least 2 instances every 30 minutes to be
+treated as healthy, the SLA requirement can be expressed with a
+[`CountSlaPolicy`](../../reference/configuration/#countslapolicy-objects) like below,
+  name = 'replicated-service',
+  role = 'www-data',
+  instances = 3,
+  sla_policy = CountSlaPolicy(
+    count = 2,
+    duration_secs = 1800
+  )
+  ...
+### [Percentage-based](../../reference/configuration/#percentageslapolicy-objects)
+For jobs that need a minimum `percentage` of instances to be running all the time,
+[`PercentageSlaPolicy`](../../reference/configuration/#percentageslapolicy-objects) provides
+ability to express the minimum percentage of required active instances (i.e. percentage of
+that are `RUNNING` for at least `duration_secs`). For instance, if we have a `webservice`
+has 10000 instances for handling peak load and cannot have more than 0.1% of the instances
+for every 1 hr, the SLA requirement can be expressed with a
+[`PercentageSlaPolicy`](../../reference/configuration/#percentageslapolicy-objects) like
+  name = 'frontend-service',
+  role = 'www-data',
+  instances = 10000,
+  sla_policy = PercentageSlaPolicy(
+    percentage = 99.9,
+    duration_secs = 3600
+  )
+  ...
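Both policies boil down to a threshold on the number of active instances; a minimal sketch (the function names are illustrative, not Aurora APIs):

```python
def count_sla_ok(active_instances, count):
    """CountSlaPolicy check: at least `count` instances must remain active."""
    return active_instances >= count

def percentage_sla_ok(active_instances, total_instances, percentage):
    """PercentageSlaPolicy check: at least `percentage` percent of the
    total instances must remain active."""
    return active_instances >= total_instances * percentage / 100.0
```

Under the examples above, draining one of the 3 `replicated-service` instances is allowed only while at least 2 stay active, and draining a `frontend-service` instance only while at least 99.9% of the 10000 instances stay active.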
+### [Coordinator-based](../../reference/configuration/#coordinatorslapolicy-objects)
+When none of the above methods are enough to describe the SLA requirements for a job, the SLA
+calculation can be off-loaded to a custom service called the `Coordinator`. The `Coordinator` is
+expected to expose an endpoint that will be called to check if removal of a task will affect the SLA
+requirements for the job. This is useful to control the number of tasks that undergo maintenance
+at a time, without affecting the SLA for the application.
+Consider the example, where we have a `storage-service` that stores 2 replicas of an object. Each
+replica is distributed across the instances, such that replicas are stored on different hosts. A
+consistent-hash is used for distributing the data across the instances.
+When an instance needs to be drained (say for host-maintenance), we have to make sure that at least 1 of
+the 2 replicas remains available. In such a case, a `Coordinator` service can be used to maintain
+the SLA guarantees required for the job.
+The job can be configured with a
+[`CoordinatorSlaPolicy`](../../reference/configuration/#coordinatorslapolicy-objects) to
specify the
+coordinator endpoint and the field in the response JSON that indicates whether the SLA will
+be affected when the task is removed.
+  name = 'storage-service',
+  role = 'www-data',
+  sla_policy = CoordinatorSlaPolicy(
+    coordinator_url = '',
+    status_key = 'drain'
+  )
+  ...
+#### Coordinator Interface [Experimental]
+When a [`CoordinatorSlaPolicy`](../../reference/configuration/#coordinatorslapolicy-objects) is
+specified for a job, any action that requires removing a task
+(such as drains) will be required to get approval from the `Coordinator` before proceeding. The
+coordinator service needs to expose an HTTP endpoint that can take a `task-key` param
+(`<cluster>/<role>/<env>/<name>/<instance>`) and a JSON body describing the task
+details, force maintenance countdown (ms) and other params, and return a response JSON that
+contains the boolean status for allowing or disallowing the task's removal.
+##### Request:
+  ?task=<cluster>/<role>/<env>/<name>/<instance>
+  {
+    "forceMaintenanceCountdownMs": "604755646",
+    "task": "cluster/role/devel/job/1",
+    "taskConfig": {
+      "assignedTask": {
+        "taskId": "taskA",
+        "slaveHost": "a",
+        "task": {
+          "job": {
+            "role": "role",
+            "environment": "devel",
+            "name": "job"
+          },
+          ...
+        },
+        "assignedPorts": {
+          "http": 1000
+        },
+        "instanceId": 1
+        ...
+      },
+      ...
+    }
+  }
+##### Response:
+  {
+    "drain": true
+  }
+If the Coordinator allows removal of the task, then the task's
+[termination lifecycle](../../reference/configuration/#httplifecycleconfig-objects)
+is triggered. If the Coordinator does not allow removal, then the request will be retried again
+in the future.
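A coordinator can be as small as a handler that parses the request body and answers under the policy's `status_key`. In this hedged sketch, the replica-count check stands in for real application logic, and `replicas_available` is a hypothetical input from the application's own bookkeeping:

```python
import json

def handle_drain_request(body, replicas_available):
    """Decide whether the task named in the scheduler's request may be drained.

    `body` is the JSON document POSTed by the scheduler; returns the response
    JSON whose `drain` field matches the policy's `status_key` above.
    """
    request = json.loads(body)
    task_key = request['task']  # e.g. "cluster/role/devel/job/1", usable for lookups
    # Permit the drain only if at least one other replica stays available.
    return json.dumps({'drain': replicas_available > 1})
```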
+#### Coordinator Actions
+Each coordinator endpoint gets its own lock, which is used to serialize calls to the Coordinator.
+This guarantees that only one concurrent request is sent to a coordinator endpoint. This allows
+coordinators to simply look at the current state of the tasks to determine the SLA (without having
+to worry about in-flight and pending requests). However, if there are multiple coordinators,
+maintenance can be done in parallel across all the coordinators.
+_Note: A single concurrent request to a coordinator endpoint does not translate to an exactly-once
+guarantee. The coordinator must be able to handle duplicate drain
+requests for the same task._
