leak

    1. Spawn a bunch of servers (a minimal sketch of a leaking node follows this list):
       • another-fast: a node with a linear memory leak
       • fast: a node with a linear memory leak
       • slow: a node with a memory leak that grows very slowly
       • plain: a node with no memory leak
    2. Spawn an artillery instance for each node that loads it with a small but constant stream of requests
    3. Spawn Prometheus, which watches the CPU/memory of each node
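
For reference, here is a minimal sketch of what one of the leaking nodes could look like (hypothetical; the actual servers live in src/ and may differ): every request retains a buffer that is never released, so resident memory grows linearly with the constant request stream from artillery.

// leaky-server.js — hypothetical sketch of a node with a linear memory leak
const http = require('http');

const retained = [];

const server = http.createServer((req, res) => {
  // retain ~1 MB per request so the leak shows up clearly on a Prometheus graph
  retained.push(Buffer.alloc(1024 * 1024));
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('retained ' + retained.length + ' MB\n');
});

server.listen(process.env.PORT || 8080);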

Then, locally, we start the same server, and we can see the individual instances and an aggregate of the metrics for each job (a sketch of the query follows below).
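
A sketch of how that per-job aggregate could be pulled from the Prometheus HTTP API (the query and the process_resident_memory_bytes metric name are assumptions; the real integration lives in src/):

// query-prometheus.js — hypothetical sketch of consuming the Prometheus HTTP API
const http = require('http');

const query = 'sum by (job) (process_resident_memory_bytes)';
const url = 'http://127.0.0.1:9090/api/v1/query?query=' + encodeURIComponent(query);

http.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    // the result is an instant vector with one sample per job label
    const { data } = JSON.parse(body);
    data.result.forEach(({ metric, value }) => {
      console.log(metric.job + ': ' + value[1] + ' bytes');
    });
  });
});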

usage

λ docker-compose up
λ node .

Go to http://127.0.0.1:8000/ to see the result. Prometheus is also listening at http://127.0.0.1:9090/

example