
Performance


When we talk about the performance of Eywa, we usually mean how many devices/connections can be tracked concurrently with limited hardware resources. This page gives a sense of how it performs in that regard and describes the steps to reproduce the benchmark.

Test Setup

Server Configuration:

1 CPU / 1 GB, Ubuntu 14.04.3 x64, 30 GB SSD, San Francisco

Basic Tuning: ulimit -n 1048576
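Note that ulimit -n only affects the current shell session. A sketch of making the higher limit stick across logins (assuming root access and a PAM-based login):

# Persist the open-file limit for all users (illustrative values)
echo '*  soft  nofile  1048576' >> /etc/security/limits.conf
echo '*  hard  nofile  1048576' >> /etc/security/limits.conf

# Raise the kernel-wide file handle cap as well
echo 'fs.file-max = 1048576' >> /etc/sysctl.conf
sysctl -p

# Verify from a fresh login
ulimit -n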

Octopus configuration:

auto_reload: 10m
service:
  host: localhost
  http_port: 8080
  ws_port: 8081
  pid_file: /var/octopus/octopus.pid
security:
  dashboard:
    username: root
    password: waterISwide
    token_expiry: 24h
    aes:
      key: abcdefg123456789
      iv: abcdefg123456789
  ssl:
    certfile:
    keyfile:
connections:
  registry: memory
  nshards: 8
  init_shard_size: 1024
  request_queue_size: 8
  expiry: &expiry 300s
  timeouts:
    write: 2s
    read: *expiry
    request: 2s
    response: 8s
  buffer_sizes:
    read: 1024
    write: 1024
indices:
  host: localhost
  port: 9200
  number_of_shards: 8
  number_of_replicas: 0
  ttl_enabled: true
  ttl: '336h'
database:
  db_type: sqlite3
  db_file: /var/octopus/octopus.db
logging:
  octopus:
    filename: /var/octopus/octopus.log
    maxsize: 1024
    maxage: 7
    maxbackups: 5
    level: info
    buffer_size: 1024
  indices:
    filename: /var/octopus/indices.log
    maxsize: 1024
    maxage: 7
    maxbackups: 5
    level: warn
    buffer_size: 1024
  database:
    filename: /var/octopus/db.log
    maxsize: 1024
    maxage: 7
    maxbackups: 5
    level: warn
    buffer_size: 1024
Client Configuration:

2 client servers.

Each with 2 CPU / 4 GB, Ubuntu 14.04.3 x64, 60 GB SSD, San Francisco

Basic Tuning: ulimit -n 1048576

Benchmark command:

go run tasks/benchmark.go -host=<server host> -ports=8080:8081 -user=root -passwd=waterISwide -fields=temperature:float -c=20000 -p=5 -m=5 -r=300s -w=10s -i=20000 -I=3 > bench.log 2>&1

For details about the benchmark options, see go run tasks/benchmark.go -h
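To reproduce the full load, the same command is started on both client servers, so the -c=20000 connections per client add up to 40,000 in total. A minimal way to kick it off (the client hostnames and project directory are placeholders, not part of the original setup):

# Start the benchmark on both client servers (hostnames are placeholders)
for c in client1 client2; do
  ssh "$c" 'cd <project dir> && ulimit -n 1048576 && \
    nohup go run tasks/benchmark.go -host=<server host> -ports=8080:8081 \
      -user=root -passwd=waterISwide -fields=temperature:float \
      -c=20000 -p=5 -m=5 -r=300s -w=10s -i=20000 -I=3 > bench.log 2>&1 &'
done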

Test Result

We skipped indexing while benchmarking Octopus, because with indexing enabled the performance bottleneck is essentially Elasticsearch. Improving Elasticsearch indexing involves cluster management and other details that are not part of the Octopus project.

On this 1 CPU / 1 GB virtualized node from DigitalOcean, Octopus managed to keep track of 40,000 connections at the same time (20,000 from each of the two client servers). All messages and ping-pongs completed without errors.

load avg: 0.35, 0.23, 0.17

cpu percentage: 23.6%

memory percentage: 73%
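Comparable figures can be sampled on the server with standard tools while the benchmark runs (a sketch; the process name octopus is an assumption):

# Sample load, CPU and memory on the server during the run
uptime                                     # load averages
top -b -n 1 | head -n 20                   # overall CPU/memory snapshot
ps -o pid,%cpu,%mem,rss,comm -C octopus    # per-process usage (process name assumed)
ss -s                                      # socket summary, rough count of open connections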

After pushing the number of connections to 45,000, Octopus was killed by the system due to memory usage.

We believe that with some TCP tuning, an 8 CPU / 32 GB server or larger could track more than 1 million devices. We will try to set up another benchmark to see how to make this happen.
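The kind of TCP tuning we have in mind is along these lines, a sketch with illustrative values rather than a verified recipe:

# Illustrative kernel settings for very large numbers of concurrent sockets
sysctl -w fs.file-max=2097152
sysctl -w net.core.somaxconn=65535
sysctl -w net.ipv4.tcp_max_syn_backlog=65535
sysctl -w net.ipv4.ip_local_port_range="1024 65535"     # mainly matters on the client side
sysctl -w net.ipv4.tcp_mem="1048576 1572864 2097152"    # in pages; sizes total TCP buffer memory
sysctl -w net.ipv4.tcp_tw_reuse=1                       # reuse TIME_WAIT sockets for new outbound connections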
