Introduction to TIGK stack for IoT

Vintage Computer Commitee wrote 08/07/2017 at 17:03 • 6 min read

The TIGK Stack is a collection of associated technologies that combine to deliver a platform for capturing, storing, monitoring, and visualizing time series data. The TIGK stack consists of the following technologies:

  1. Telegraf
  2. InfluxDB
  3. Grafana
  4. Kapacitor

Basic Feature Set of TIGK

As this is a combination of technologies, it is best to introduce the key features of each part of the TIGK stack in turn:

  1. Telegraf is a plugin-driven agent that collects metrics from systems, services, and sensors and writes them to InfluxDB.
  2. InfluxDB is a database built specifically for time series data, offering a SQL-like query language and retention policies for downsampling and expiring old data.
  3. Grafana provides dashboards and visualizations on top of the stored data; the compose file below uses Chronograf, InfluxData's own UI, in the same role.
  4. Kapacitor is a stream and batch processing engine that can transform data on the fly and trigger alerts in real time.

The Main Benefits of Using TIGK

The TIGK stack offers substantial benefits to developers who choose to use it. It provides true real-time analysis of data streams, along with on-the-fly processing and computation.

Because historical data can be aggregated into the same real-time stream, the stack supports not only real-time analysis but historical analysis as well. This makes TIGK a solid foundation for a predictive analytics platform.
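The combination can be illustrated with a small sketch in plain Python (not Kapacitor's actual TICKscript language): a rolling window stands in for aggregated historical data, and each new reading is checked against it in real time. The class name, window size, and tolerance here are illustrative assumptions, not part of the stack.

```python
from collections import deque

class RollingBaseline:
    """Keep a rolling window of historical readings and flag new
    readings that deviate too far from the historical mean."""

    def __init__(self, window=100, tolerance=0.5):
        self.history = deque(maxlen=window)
        self.tolerance = tolerance  # allowed fractional deviation from the mean

    def observe(self, value):
        """Ingest a new real-time reading; return True if it is
        anomalous relative to the historical baseline."""
        anomalous = False
        if self.history:
            mean = sum(self.history) / len(self.history)
            anomalous = abs(value - mean) > self.tolerance * mean
        self.history.append(value)
        return anomalous
```

In the real stack, Kapacitor performs this kind of computation directly on data flowing out of InfluxDB, so the historical aggregate and the live stream are handled by the same engine.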

TIGK is rapid to implement, and because it is distributed under an open source license, the cost of ownership is relatively low once developers are up to speed.

Use Cases for TIGK

TIGK aligns well with many potential use cases. It is an especially good fit for applications that trigger events from constant real-time data streams. An excellent example is fleet tracking: TIGK can monitor fleet data in real time and raise an alert condition if something out of the ordinary occurs. It can also visualize the fleet in its entirety, creating a real-time dashboard of fleet status.

IoT deployments are another strong fit. Solutions that combine data streams from many IoT devices to build an overall view, such as an automated manufacturing line, work well with TIGK. TIGK can trigger alert events and easily visualize the status of an entire production line.
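In Kapacitor, this kind of alerting is expressed as a TICKscript. The sketch below is a minimal example of the pattern; the measurement name, field, threshold, and log path are hypothetical placeholders, not values from this deployment:

```
stream
    |from()
        .measurement('engine_temp')
    |alert()
        .crit(lambda: "value" > 90)
        .log('/tmp/alerts.log')
```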

Setup

The quickest and most manageable way to deploy a TIGK stack is on a Docker Swarm cluster. Below is our docker-compose.yml template for reference.

To follow along, you just need a few things:

  1. A VPS, or an AWS or OpenStack account
  2. docker-machine
  3. docker-compose
  4. docker-ce

Installation and setup of the above tools is out of scope for this article but will be covered in a future video.
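Assuming those tools are already installed, the deployment itself is only a few commands. This is a sketch for a single-node swarm; the machine name, stack name, and host IP are placeholders:

```shell
# Provision a remote host and point the local Docker client at it
docker-machine create --driver generic --generic-ip-address <host-ip> tigk-node1
eval "$(docker-machine env tigk-node1)"

# Initialise a single-node swarm and deploy the stack from the compose file
docker swarm init
docker stack deploy -c docker-compose.yml tigk
```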

docker-compose.yml:

version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy
    environment:
      - "DEFAULT_HOST=portainer.example.com"
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    ports:
      - "80:80/tcp"
    networks:
      - influx
  # FRONT
  portainer:
    # full tag list: https://hub.docker.com/r/library/portainer/tags/
    image: portainer/portainer:latest
    # Note: container_name is ignored by swarm-mode stack deploys
    container_name: "portainer-app"
    command:
      - "--tlsverify"
      # --no-auth disables Portainer's login screen entirely
      - "--no-auth"
  chronograf:
    # Full tag list: https://hub.docker.com/r/library/chronograf/tags/
    image: chronograf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    environment:
      - "VIRTUAL_PORT=8888"
      - "VIRTUAL_HOST=monitoring.example.com"
    volumes:
      # Mount for chronograf database
      - chronograf-data:/var/lib/chronograf
    ports:
      # The WebUI for Chronograf is served on port 8888
      - "8888:8888"
    networks:
      - influx
    depends_on:
      - kapacitor
      - influxdb
  # MIDDLE
  kapacitor:
    # Full tag list: https://hub.docker.com/r/library/kapacitor/tags/
    image: kapacitor
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      # Mount for kapacitor data directory
      - kapacitor-data:/var/lib/kapacitor
      # Mount for kapacitor configuration
      - /etc/kapacitor/config:/etc/kapacitor
    ports:
      # The API for Kapacitor is served on port 9092
      - "9092:9092"
    networks:
      - influx
    depends_on:
      - influxdb
  # BACK
  telegraf:
    # Full tag list: https://hub.docker.com/r/library/telegraf/tags/
    image: telegraf
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
    volumes:
      # Mount for telegraf configuration
      - /etc/telegraf:/etc/telegraf
      # Mount for Docker API access
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - influx
    depends_on:
      - influxdb
  watchtower:
    image: "v2tec/watchtower:latest"
    container_name: "portainer-watchtower"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    healthcheck:
      test: "exit 0"
  # DATABASE
  influxdb:
    # Full tag list: https://hub.docker.com/r/library/influxdb/tags/
    image: influxdb
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      # Mount for influxdb data directory
      - influxdb-data:/var/lib/influxdb
      # Mount for influxdb configuration
      - /etc/influxdb/config:/etc/influxdb
    ports:
      # The API for InfluxDB is served on port 8086
      - "8086:8086"
    networks:
      - influx
networks:
  influx:
volumes:
  chronograf-data:
  kapacitor-data:
  influxdb-data:
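
Once the stack is up, a quick sanity check is InfluxDB's /ping endpoint, assuming the default port mapping above:

```shell
# InfluxDB returns HTTP 204 No Content when it is healthy
curl -i http://localhost:8086/ping
```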