leak

    1. Spawn a bunch of servers (see the sketch after this list):
       • another-fast: a node with a linear memory leak
       • fast: a node with a linear memory leak
       • slow: a node with a memory leak that grows very slowly
       • plain: a node with no memory leak
    2. Spawn an Artillery instance for each node that loads it with a small but constant stream of requests
    3. Spawn Prometheus, which watches the CPU/memory of each node
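
For illustration only, a minimal sketch of what one of the leaking nodes might look like; the port, allocation size, and use of a plain http server are assumptions, not the actual code in src:

const http = require('http');

const leaked = []; // never released, so memory grows with every request

const server = http.createServer((req, res) => {
  // retain ~64KB per request; under artillery's constant request rate
  // this shows up as roughly linear growth in resident memory
  leaked.push(Buffer.alloc(64 * 1024));
  res.end('ok');
});

server.listen(process.env.PORT || 8000);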

Then we start the same server locally, where we can see the individual instances as well as an aggregate of the metrics for each job.
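
As a rough sketch, an aggregate like that can be pulled from the Prometheus HTTP API with a PromQL query grouped by job; the metric name below is an assumption and may differ from what the nodes actually expose:

const http = require('http');

const query = encodeURIComponent('sum by (job) (process_resident_memory_bytes)');

http.get('http://127.0.0.1:9090/api/v1/query?query=' + query, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // one series per job: another-fast, fast, slow, plain
    console.log(JSON.parse(body).data.result);
  });
});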

usage

λ docker-compose up
λ node .

Go to http://127.0.0.1:8000/ to see the result. Prometheus itself is listening at http://127.0.0.1:9090/.
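
To check that Prometheus is actually scraping the nodes, its HTTP API can be queried directly; the built-in up metric reports 1 for every target it can reach:

λ curl 'http://127.0.0.1:9090/api/v1/query?query=up'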

example