Compare commits

...

15 Commits
v0.0.3 ... main

Author          SHA1        Message                                                    Date
Mark Nellemann  a55cc12fa5  Update README.md                                           2024-05-17 06:18:44 +00:00
Mark Nellemann  f5440d764c  Update version                                             2023-11-22 10:50:13 +01:00
Mark Nellemann  98db370f0b  Dependency updates and cleanup.                            2023-11-22 10:47:00 +01:00
Mark Nellemann  c7644a9137  Revert gradle plugin update (due to Java-8 compatibility)  2023-08-08 14:41:44 +02:00
Mark Nellemann  046470eec1  Update 3rd party deps. [CI: build failing]                 2023-08-08 14:37:36 +02:00
Mark Nellemann  9468c1b695  Merge pull request 'Support for InfluxDB 2.x (now requires 1.8 or later)' (#2) from influxdb2 into main (Reviewed-on: #2)  2023-05-25 13:23:01 +00:00
Mark Nellemann  04d3d8d0cd  Fix typos.                                                 2023-05-25 15:18:53 +02:00
Mark Nellemann  88e45c21d9  Merge conflicts                                            2023-05-25 15:17:10 +02:00
Mark Nellemann  0995d91287  Support for InfluxDB 2.x (now requires 1.8 or later). [CI: PR build failing]  2023-05-20 12:38:35 +02:00
Mark Nellemann  708302d1ab  Fix typo in config example and update some build deps.     2023-05-10 10:54:41 +02:00
Mark Nellemann  3b9119f9ad  Update documentation.                                      2023-03-08 10:28:27 +01:00
Mark Nellemann  39c227b2de  Update dashboard link.                                     2023-02-06 19:40:27 +01:00
Mark Nellemann  3146ad455a  Provide screenshot in README and update dependencies.      2023-02-06 19:21:33 +01:00
Mark Nellemann  5bfbeccbd2  Update links.                                              2023-01-18 15:45:42 +01:00
Mark Nellemann  0285bd09e8  Update links.                                              2023-01-06 08:04:10 +01:00
22 changed files with 232 additions and 430 deletions


@@ -1,3 +1,10 @@
# Changelog
All notable changes to this project will be documented in this file.
## 0.1.2 - 2023-08-08
- Updated 3rd party dependencies
## 0.1.1 - 2023-05-20
- Support for InfluxDB v2, now requires InfluxDB 1.8 or later

README.md

@@ -1,165 +1,3 @@
# Spectrum Virtualize Insights
# Repository moved
**SVCi** is a utility that collects metrics from one or more *IBM SAN Volume Controllers*. The metric data is processed and saved into an InfluxDB time-series database. Grafana is used to visualize the metrics data from InfluxDB through provided dashboards, or your own customized dashboards.
This software is free to use and is licensed under the [Apache 2.0 License](LICENSE), but is not supported or endorsed by International Business Machines (IBM).
![architecture](doc/SVCi.png)
Some of my other related projects are:
- [hmci](https://bitbucket.org/mnellemann/hmci) for agent-less monitoring of IBM Power servers
- [sysmon](https://git.data.coop/nellemann/sysmon) for monitoring all types of servers with a small Java agent
- [syslogd](https://git.data.coop/nellemann/syslogd) for redirecting syslog and GELF to remote logging destinations
## Installation and Setup
There are a few steps to the installation.
1. Prepare your Spectrum Virtualize
2. Installation of InfluxDB and Grafana software
3. Installation and configuration of *SVC Insights* (SVCi)
4. Configure Grafana and import example dashboards
### 1 - Prepare Spectrum Virtualize
- Create a user with the "Monitor" role
### 2 - InfluxDB and Grafana Installation
Install InfluxDB (v. **1.8.x** or **1.9.x** for best compatibility with Grafana) on a host which is network accessible by the SVCi utility (the default InfluxDB port is 8086). You can install Grafana on the same server or any server that is able to connect to the InfluxDB database. The Grafana installation needs to be accessible from your browser (default on port 3000). The default settings for both InfluxDB and Grafana will work fine as a start.
- You can download [Grafana ppc64le](https://www.power-devops.com/grafana) and [InfluxDB ppc64le](https://www.power-devops.com/influxdb) packages for most Linux distributions and AIX on the [Power DevOps](https://www.power-devops.com/) site.
- Binaries for amd64/x86 are available from the [Grafana website](https://grafana.com/grafana/download) (select the **OSS variant**) and the [InfluxDB website](https://portal.influxdata.com/downloads/), and most likely directly from your Linux distribution's repositories.
- Create the empty *svci* database by running the **influx** CLI command and typing:
```text
CREATE DATABASE "svci" WITH DURATION 365d REPLICATION 1;
```
See the [Influx documentation](https://docs.influxdata.com/influxdb/v1.8/query_language/manage-database/#create-database) for more information on duration and replication.
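Afterwards, you can sanity-check the database and its retention policy from the same **influx** CLI (a quick verification step, assuming the CLI from above):

```text
SHOW DATABASES;
SHOW RETENTION POLICIES ON "svci";
```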
### 3 - SVCi Installation & Configuration
Install *SVCi* on a host that can connect to your SAN Volume Controllers (on port 7443) and is also allowed to connect to the InfluxDB service. This *can be* the same LPAR/VM as used for the InfluxDB installation.
- Ensure you have **correct date/time** and NTPd running to keep it accurate!
- The only requirement for **svci** is the Java runtime, version 8 (or later)
- Install **SVCi** from [packages](https://git.data.coop/nellemann/-/packages/generic/svci/) (rpm, deb or jar) or build from source
- On RPM based systems: ```sudo rpm -ivh svci-x.y.z-n.noarch.rpm```
- On DEB based systems: ```sudo dpkg -i svci_x.y.z-n_all.deb```
- Copy the **/opt/svci/doc/svci.toml** configuration example into **/etc/svci.toml** and edit the configuration to suit your environment. The location of the configuration file can optionally be changed with the *--conf* option.
- Run the **/opt/svci/bin/svci** program in a shell, as a @reboot cron task, or configure it as a proper service - there are instructions in the [doc/](doc/) folder.
- When started, *svci* expects the InfluxDB database to exist already.
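For the service option, a minimal *systemd* unit along these lines can be used (a sketch only; the packages also ship an example *svci.service* file in **/opt/svci/doc/**, and paths assume the package layout above):

```ini
[Unit]
Description=SVC Insights (SVCi)
After=network-online.target

[Service]
# Adjust the configuration path and service account to your environment
ExecStart=/opt/svci/bin/svci --conf /etc/svci.toml
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```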
### 4 - Grafana Configuration
- Configure Grafana to use InfluxDB as a new datasource
- **NOTE:** set *Min time interval* depending on your SVCi *refresh* setting.
- Import example dashboards from [doc/dashboards/*.json](doc/dashboards/) into Grafana as a starting point and get creative making your own cool dashboards - please share anything useful :)
## Notes
### No data (or past/future data) shown in Grafana
This is most likely due to timezone, date and/or NTP not being configured correctly on the SAN Volume Controller and/or the host running SVCi.
### Start InfluxDB and Grafana at boot (systemd compatible Linux)
```shell
systemctl enable influxdb
systemctl start influxdb
systemctl enable grafana-server
systemctl start grafana-server
```
### InfluxDB Retention Policy
Examples for changing the default InfluxDB retention policy for the svci database:
```text
ALTER RETENTION POLICY "autogen" ON "svci" DURATION 156w
ALTER RETENTION POLICY "autogen" ON "svci" DURATION 90d
```
### Upgrading SVCi
On RPM based systems (RedHat, Suse, CentOS), download the latest *svci-x.y.z-n.noarch.rpm* file and upgrade:
```shell
sudo rpm -Uvh svci-x.y.z-n.noarch.rpm
```
On DEB based systems (Debian, Ubuntu and derivatives), download the latest *svci_x.y.z-n_all.deb* file and upgrade:
```shell
sudo dpkg -i svci_x.y.z-n_all.deb
```
Restart the SVCi service on *systemd* based Linux systems:
```shell
systemctl restart svci
journalctl -f -u svci # to check log output
```
### AIX Notes
To install (or upgrade) on AIX, you need to pass the *--ignoreos* flag to the *rpm* command:
```shell
rpm -Uvh --ignoreos svci-x.y.z-n.noarch.rpm
```
## Screenshots
Screenshots of the provided Grafana dashboard can be found in the [doc/screenshots/](doc/screenshots) folder.
## Known problems
## Development Information
You need Java (JDK) version 8 or later to build svci.
### Build & Test
Use the Gradle build tool, which will download all required dependencies:
```shell
./gradlew clean build
```
### Local Testing
#### InfluxDB
Start the InfluxDB container:
```shell
docker run --name=influxdb --rm -d -p 8086:8086 influxdb:1.8
```
Create the *svci* database:
```shell
docker exec -i influxdb influx -execute "CREATE DATABASE svci"
```
#### Grafana
Start the Grafana container, linking it to the InfluxDB container:
```shell
docker run --name grafana --link influxdb:influxdb --rm -d -p 3000:3000 grafana/grafana
```
Setup Grafana to connect to the InfluxDB container by defining a new datasource on URL *http://influxdb:8086* named *svci*.
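Instead of clicking through the UI, the datasource can also be defined with Grafana's file-based provisioning (a sketch only; the file path and names are assumptions to adjust for your setup):

```yaml
# e.g. /etc/grafana/provisioning/datasources/svci.yaml (path assumed)
apiVersion: 1
datasources:
  - name: svci
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: svci
```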
Grafana dashboards can be imported from the *doc/dashboards/* folder.
Please visit [github.com/mnellemann/svci](https://github.com/mnellemann/svci)


@@ -2,13 +2,11 @@ plugins {
id 'java'
id 'groovy'
id 'application'
// Code coverage of tests
id 'jacoco'
id "net.nemerosa.versioning" version "2.15.1"
id "com.netflix.nebula.ospackage" version "11.4.0"
id "com.github.johnrengelman.shadow" version "7.1.2"
id "com.netflix.nebula.ospackage" version "10.0.0"
}
repositories {
@@ -20,21 +18,19 @@ group = projectGroup
version = projectVersion
dependencies {
annotationProcessor 'info.picocli:picocli-codegen:4.7.0'
implementation 'info.picocli:picocli:4.7.0'
implementation 'org.influxdb:influxdb-java:2.23'
//implementation 'com.influxdb:influxdb-client-java:6.7.0'
implementation 'org.slf4j:slf4j-api:2.0.6'
implementation 'org.slf4j:slf4j-simple:2.0.6'
implementation 'com.squareup.okhttp3:okhttp:4.10.0' // Also used by InfluxDB Client
//implementation "org.eclipse.jetty:jetty-client:9.4.49.v20220914"
implementation 'com.fasterxml.jackson.core:jackson-databind:2.14.1'
implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.14.1'
implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-toml:2.14.1'
annotationProcessor 'info.picocli:picocli-codegen:4.7.5'
implementation 'info.picocli:picocli:4.7.5'
implementation 'com.influxdb:influxdb-client-java:6.10.0'
implementation 'org.slf4j:slf4j-api:2.0.9'
implementation 'org.slf4j:slf4j-simple:2.0.9'
implementation 'com.squareup.okhttp3:okhttp:4.11.0' // Also used by InfluxDB Client
implementation 'com.fasterxml.jackson.core:jackson-databind:2.15.3'
implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.15.3'
implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-toml:2.15.3'
testImplementation 'junit:junit:4.13.2'
testImplementation 'org.spockframework:spock-core:2.3-groovy-3.0'
testImplementation "org.mock-server:mockserver-netty-no-dependencies:5.14.0"
testImplementation 'org.spockframework:spock-core:2.3-groovy-4.0'
testImplementation "org.mock-server:mockserver-netty-no-dependencies:5.15.0"
}
application {

doc/TODO.md (new file)

@@ -0,0 +1,19 @@
# TODO
Extended stats
```shell
svctask stopstats
svctask startstats -interval 5
lsdumps -prefix /dumps/iostats
```
The files generated are written to the /dumps/iostats directory.
https://www.ibm.com/docs/en/flashsystem-5x00/8.4.x?topic=commands-startstats
https://www.ibm.com/support/pages/overview-svc-v510-performance-statistics


@@ -76,7 +76,7 @@
}
]
},
"description": "https://bitbucket.org/mnellemann/svci/",
"description": "https://git.data.coop/nellemann/svci/ - Metrics collected from IBM Spectrum Virtualize.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
@@ -97,7 +97,7 @@
},
"id": 16,
"options": {
"content": "## Metrics collected from IBM Spectrum Virtualize\n \nFor more information: [bitbucket.org/mnellemann/svci](https://bitbucket.org/mnellemann/svci)\n ",
"content": "## Metrics collected from IBM Spectrum Virtualize\n \nFor more information visit: [git.data.coop/nellemann/svci](https://git.data.coop/nellemann/svci)\n ",
"mode": "markdown"
},
"pluginVersion": "9.1.6",
@@ -4152,4 +4152,4 @@
"uid": "7R8LbzKV3",
"version": 4,
"weekStart": ""
}
}


@@ -2,8 +2,6 @@
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file.
- Grafana and InfluxDB can be downloaded from the [Power DevOps](https://www.power-devops.com/) website - look under the *Monitor* section.
- Ensure Java (version 8 or later) is installed and available in your PATH.
@@ -11,9 +9,10 @@ More details are available in the [README.md](../README.md) file.
## Download and Install svci
[Download](https://git.data.coop/nellemann/-/packages/generic/svci/) the latest version of SVCi packaged for rpm.
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci-0.0.1-1_all.rpm
rpm -i --ignoreos svci-0.0.1-1_all.rpm
rpm -ivh --ignoreos svci-0.0.3-1_all.rpm
cp /opt/svci/doc/svci.toml /etc/
```


@@ -1,15 +1,13 @@
# Instruction for Debian / Ubuntu Systems
# Instructions for Debian / Ubuntu Systems
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file.
All commands should be run as root or through sudo.
## Install the Java Runtime from repository
```shell
apt-get install default-jre-headless
apt-get install default-jre-headless wget
```
@@ -25,13 +23,17 @@ systemctl start influxdb
Run the ```influx``` cli command and create the *svci* database.
```sql
CREATE DATABASE "svci" WITH DURATION 365d REPLICATION 1;
```
## Download and Install Grafana
```shell
sudo apt-get install -y adduser libfontconfig1
wget https://dl.grafana.com/oss/release/grafana_9.1.3_amd64.deb
dpkg -i grafana_9.1.3_amd64.deb
apt-get install -y adduser libfontconfig1
wget https://dl.grafana.com/oss/release/grafana_9.1.7_amd64.deb
dpkg -i grafana_9.1.7_amd64.deb
systemctl daemon-reload
systemctl enable grafana-server
systemctl start grafana-server
@@ -42,9 +44,11 @@ When logged in to Grafana (port 3000, admin/admin) create a datasource that poin
## Download and Install svci
[Download](https://git.data.coop/nellemann/-/packages/generic/svci/) the latest version of SVCi packaged for deb.
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci_0.0.1-1_all.deb
dpkg -i svci_0.0.1-1_all.deb
wget https://git.data.coop/api/packages/nellemann/generic/svci/v0.0.3/svci_0.0.3-1_all.deb
dpkg -i svci_0.0.3-1_all.deb
cp /opt/svci/doc/svci.toml /etc/
cp /opt/svci/doc/svci.service /etc/systemd/system/
systemctl daemon-reload


@@ -2,16 +2,14 @@
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file. If you are running Linux on Power (ppc64le) you should look for ppc64le packages at the [Power DevOps](https://www.power-devops.com/) website.
All commands should be run as root or through sudo.
## Install the Java Runtime from repository
```shell
dnf install java-11-openjdk-headless
dnf install java-11-openjdk-headless wget
# or
yum install java-11-openjdk-headless
yum install java-11-openjdk-headless wget
```
@@ -27,12 +25,15 @@ systemctl start influxdb
Run the ```influx``` cli command and create the *svci* database.
```sql
CREATE DATABASE "svci" WITH DURATION 365d REPLICATION 1;
```
## Download and Install Grafana
```shell
wget https://dl.grafana.com/oss/release/grafana-9.1.3-1.x86_64.rpm
rpm -ivh grafana-9.1.3-1.x86_64.rpm
wget https://dl.grafana.com/oss/release/grafana-9.1.7-1.x86_64.rpm
rpm -ivh grafana-9.1.7-1.x86_64.rpm
systemctl daemon-reload
systemctl enable grafana-server
systemctl start grafana-server
@@ -44,13 +45,12 @@ When logged in to Grafana (port 3000, admin/admin) create a datasource that poin
## Download and Install svci
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci-0.0.1-1_all.rpm
rpm -ivh svci-0.0.1-1_all.rpm
wget https://git.data.coop/api/packages/nellemann/generic/svci/v0.0.3/svci-0.0.3-1.noarch.rpm
rpm -ivh svci-0.0.3-1.noarch.rpm
cp /opt/svci/doc/svci.toml /etc/
cp /opt/svci/doc/svci.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable svci
systemctl start svci
```
Now modify */etc/svci.toml* and test your setup by running ```/opt/svci/bin/svci -d``` manually to verify the connection to the SVC and InfluxDB. Afterwards, start the service with ```systemctl start svci```.


@@ -9,7 +9,7 @@ All commands should be run as root or through sudo.
## Install the Java Runtime from repository
```shell
zypper install java-11-openjdk-headless
zypper install java-11-openjdk-headless wget
```
@@ -25,12 +25,15 @@ systemctl start influxdb
Run the ```influx``` cli command and create the *svci* database.
```sql
CREATE DATABASE "svci" WITH DURATION 365d REPLICATION 1;
```
## Download and Install Grafana
```shell
wget https://dl.grafana.com/oss/release/grafana-9.1.3-1.x86_64.rpm
rpm -ivh --nodeps grafana-9.1.3-1.x86_64.rpm
wget https://dl.grafana.com/oss/release/grafana-9.1.7-1.x86_64.rpm
rpm -ivh --nodeps grafana-9.1.7-1.x86_64.rpm
systemctl daemon-reload
systemctl enable grafana-server
systemctl start grafana-server
@@ -41,9 +44,11 @@ When logged in to Grafana (port 3000, admin/admin) create a datasource that poin
## Download and Install SVCi
[Download](https://git.data.coop/nellemann/-/packages/generic/svci/) the latest version of SVCi packaged for rpm.
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci-0.0.1-1_all.rpm
rpm -ivh svci-0.0.1-1_all.rpm
wget https://git.data.coop/api/packages/nellemann/generic/svci/v0.0.3/svci-0.0.3-1.noarch.rpm
rpm -ivh svci-0.0.3-1.noarch.rpm
cp /opt/svci/doc/svci.toml /etc/
cp /opt/svci/doc/svci.service /etc/systemd/system/
systemctl daemon-reload

(Binary image changes: one existing image updated, 1.1 MiB before and after; one new image added, 180 KiB, not shown.)


@@ -1,17 +1,41 @@
# SVCi Configuration
# Copy this file into /etc/svci.toml and customize it to your environment.
# InfluxDB to save metrics
###
### Define one InfluxDB to save metrics into
### There must be only one and it should be named [influx]
###
# InfluxDB v1.x example
#[influx]
#url = "http://localhost:8086"
#username = "root"
#password = ""
#database = "svci"
# InfluxDB v2.x example
[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "svci"
org = "myOrg"
token = "rAnd0mT0k3nG3neRaT3dByInF1uxDb=="
bucket = "svci"
###
### Define one or more SVCs to query for metrics
### Each entry must be named [svc.<something-unique>]
###
# SVC on our primary site
[svc.site1]
url = "https://10.10.10.12:7443"
url = "https://10.10.10.5:7443"
username = "superuser"
password = "password"
refresh = 30
trust = true # Ignore SSL cert. errors
refresh = 30 # How often to query SVC for data - in seconds
trust = true # Ignore SSL cert. errors (due to default self-signed cert.)


@@ -1,3 +1,3 @@
projectId = svci
projectGroup = biz.nellemann.svci
projectVersion = 0.0.3
projectVersion = 0.1.3


@@ -15,17 +15,19 @@
*/
package biz.nellemann.svci;
import biz.nellemann.svci.dto.toml.Configuration;
import com.fasterxml.jackson.dataformat.toml.TomlMapper;
import picocli.CommandLine;
import picocli.CommandLine.Option;
import picocli.CommandLine.Command;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import com.fasterxml.jackson.dataformat.toml.TomlMapper;
import biz.nellemann.svci.dto.toml.Configuration;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
@Command(name = "svci",
mixinStandardHelpOptions = true,
versionProvider = biz.nellemann.svci.VersionProvider.class,
@@ -94,7 +96,7 @@ public class Application implements Callable<Integer> {
}
influxClient.logoff();
} catch (Exception e) {
} catch (InterruptedException | IOException e) {
System.err.println(e.getMessage());
return 1;
}


@@ -4,6 +4,7 @@ import picocli.CommandLine;
public class DefaultProvider implements CommandLine.IDefaultValueProvider {
@Override
public String defaultValue(CommandLine.Model.ArgSpec argSpec) throws Exception {
if(argSpec.isOption()) {
switch (argSpec.paramLabel()) {


@@ -15,43 +15,56 @@
*/
package biz.nellemann.svci;
import biz.nellemann.svci.dto.toml.InfluxConfiguration;
import org.influxdb.BatchOptions;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;
import static java.lang.Thread.sleep;
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.InfluxDBClientFactory;
import com.influxdb.client.WriteApi;
import com.influxdb.client.WriteOptions;
import com.influxdb.client.domain.WritePrecision;
import com.influxdb.client.write.Point;
import static java.lang.Thread.sleep;
import biz.nellemann.svci.dto.toml.InfluxConfiguration;
public final class InfluxClient {
private final static Logger log = LoggerFactory.getLogger(InfluxClient.class);
final private String url;
final private String username;
final private String password;
final private String database;
final private String org; // v2 only
final private String token;
final private String bucket; // Bucket in v2, Database in v1
private InfluxDB influxDB;
private InfluxDBClient influxDBClient;
private WriteApi writeApi;
InfluxClient(InfluxConfiguration config) {
this.url = config.url;
this.username = config.username;
this.password = config.password;
this.database = config.database;
if(config.org != null) {
this.org = config.org;
} else {
this.org = "svci"; // In InfluxDB 1.x, there is no concept of organization.
}
if(config.token != null) {
this.token = config.token;
} else {
this.token = config.username + ":" + config.password;
}
if(config.bucket != null) {
this.bucket = config.bucket;
} else {
this.bucket = config.database;
}
}
synchronized void login() throws RuntimeException, InterruptedException {
if(influxDB != null) {
if(influxDBClient != null) {
return;
}
@@ -61,20 +74,20 @@ public final class InfluxClient {
do {
try {
log.debug("Connecting to InfluxDB - {}", url);
influxDB = InfluxDBFactory.connect(url, username, password).setDatabase(database);
influxDB.version(); // This ensures that we actually try to connect to the db
influxDBClient = InfluxDBClientFactory.create(url, token.toCharArray(), org, bucket);
influxDBClient.version(); // This ensures that we actually try to connect to the db
Runtime.getRuntime().addShutdownHook(new Thread(influxDBClient::close));
influxDB.enableBatch(
BatchOptions.DEFAULTS
.threadFactory(runnable -> {
Thread thread = new Thread(runnable);
thread.setDaemon(true);
return thread;
})
);
Runtime.getRuntime().addShutdownHook(new Thread(influxDB::close));
// Todo: Handle events - https://github.com/influxdata/influxdb-client-java/tree/master/client#handle-the-events
//writeApi = influxDBClient.makeWriteApi();
writeApi = influxDBClient.makeWriteApi(
WriteOptions.builder()
.bufferLimit(20_000)
.flushInterval(5_000)
.build());
connected = true;
} catch(Exception e) {
sleep(15 * 1000);
if(loginErrors++ > 3) {
@@ -90,29 +103,32 @@
synchronized void logoff() {
if(influxDB != null) {
influxDB.close();
if(influxDBClient != null) {
influxDBClient.close();
}
influxDB = null;
influxDBClient = null;
}
public void write(List<Measurement> measurements, Instant timestamp, String measurement) {
log.debug("write() - measurement: {} {}", measurement, measurements.size());
processMeasurementMap(measurements, timestamp, measurement).forEach( (point) -> { influxDB.write(point); });
public void write(List<Measurement> measurements, String name) {
log.debug("write() - measurement: {} {}", name, measurements.size());
if(!measurements.isEmpty()) {
processMeasurementMap(measurements, name).forEach((point) -> {
writeApi.writePoint(point);
});
}
}
private List<Point> processMeasurementMap(List<Measurement> measurements, Instant timestamp, String measurement) {
private List<Point> processMeasurementMap(List<Measurement> measurements, String name) {
List<Point> listOfPoints = new ArrayList<>();
measurements.forEach( (m) -> {
Point.Builder builder = Point.measurement(measurement)
.time(timestamp.getEpochSecond(), TimeUnit.SECONDS)
.tag(m.tags)
.fields(m.fields);
listOfPoints.add(builder.build());
log.trace("processMeasurementMap() - timestamp: {}, tags: {}, fields: {}", m.timestamp, m.tags, m.fields);
Point point = new Point(name)
.time(m.timestamp.getEpochSecond(), WritePrecision.S)
.addTags(m.tags)
.addFields(m.fields);
listOfPoints.add(point);
});
return listOfPoints;
}


@@ -15,14 +15,23 @@
*/
package biz.nellemann.svci;
import java.time.Instant;
import java.util.Map;
public class Measurement {
final Instant timestamp;
final Map<String, String> tags;
final Map<String, Object> fields;
Measurement(Map<String, String> tags, Map<String, Object> fields) {
this.timestamp = Instant.now();
this.tags = tags;
this.fields = fields;
}
Measurement(Instant timestamp, Map<String, String> tags, Map<String, Object> fields) {
this.timestamp = timestamp;
this.tags = tags;
this.fields = fields;
}


@@ -1,17 +1,7 @@
package biz.nellemann.svci;
import biz.nellemann.svci.dto.json.AuthResponse;
import com.fasterxml.jackson.databind.ObjectMapper;
import okhttp3.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.*;
import java.net.*;
import java.io.IOException;
import java.net.URL;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
@@ -19,6 +9,23 @@ import java.security.cert.X509Certificate;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.fasterxml.jackson.databind.ObjectMapper;
import biz.nellemann.svci.dto.json.AuthResponse;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;
public class RestClient {
private final static Logger log = LoggerFactory.getLogger(RestClient.class);
@@ -63,7 +70,8 @@ public class RestClient {
.addHeader("X-Auth-Username", username)
.addHeader("X-Auth-Password", password)
//.put(RequestBody.create(payload.toString(), MEDIA_TYPE_IBM_XML_LOGIN))
.post(RequestBody.create("", MediaType.get("text/plain")))
//.post(RequestBody.create("", MediaType.get("text/plain")))
.post(RequestBody.create("", MediaType.parse("application/json")))
.build();
String responseBody;
@@ -82,7 +90,7 @@
authToken = authResponse.token;
log.debug("logon() - auth token: {}", authToken);
} catch (Exception e) {
} catch (IOException e) {
log.warn("logon() - error: {}", e.getMessage());
}


@@ -15,14 +15,8 @@
*/
package biz.nellemann.svci;
import biz.nellemann.svci.dto.json.*;
import biz.nellemann.svci.dto.json.System;
import biz.nellemann.svci.dto.toml.SvcConfiguration;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import static java.lang.Thread.sleep;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
@@ -31,7 +25,17 @@ import java.util.HashMap;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import static java.lang.Thread.sleep;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.fasterxml.jackson.databind.ObjectMapper;
import biz.nellemann.svci.dto.json.EnclosureStat;
import biz.nellemann.svci.dto.json.MDiskGroup;
import biz.nellemann.svci.dto.json.NodeStat;
import biz.nellemann.svci.dto.json.System;
import biz.nellemann.svci.dto.json.VDisk;
import biz.nellemann.svci.dto.toml.SvcConfiguration;
class VolumeController implements Runnable {
@@ -94,10 +98,10 @@ class VolumeController implements Runnable {
void refresh() {
log.debug("refresh()");
influxClient.write(getSystem(), Instant.now(),"system");
influxClient.write(getNodeStats(), Instant.now(),"node_stats");
influxClient.write(getEnclosureStats(), Instant.now(),"enclosure_stats");
influxClient.write(getMDiskGroups(), Instant.now(),"m_disk_groups");
influxClient.write(getSystem(),"system");
influxClient.write(getNodeStats(),"node_stats");
influxClient.write(getEnclosureStats(),"enclosure_stats");
influxClient.write(getMDiskGroups(), "m_disk_groups");
}


@@ -3,6 +3,10 @@ package biz.nellemann.svci.dto.toml;
public class InfluxConfiguration {
public String url;
public String org;
public String token;
public String bucket;
public String username;
public String password;
public String database;


@@ -1,22 +0,0 @@
package biz.nellemann.svci
import biz.nellemann.svci.dto.toml.InfluxConfiguration
import spock.lang.Ignore
import spock.lang.Specification
@Ignore
class InfluxClientTest extends Specification {
InfluxClient influxClient
def setup() {
influxClient = new InfluxClient(new InfluxConfiguration("http://localhost:8086", "root", "", "svci"))
influxClient.login()
}
def cleanup() {
influxClient.logoff()
}
}


@@ -1,112 +0,0 @@
package biz.nellemann.svci
import org.mockserver.integration.ClientAndServer
import org.mockserver.logging.MockServerLogger
import org.mockserver.socket.PortFactory
import org.mockserver.socket.tls.KeyStoreFactory
import spock.lang.Ignore
import spock.lang.Shared
import spock.lang.Specification
import javax.net.ssl.HttpsURLConnection
@Ignore
class VolumeControllerTest extends Specification {
@Shared
private static ClientAndServer mockServer;
@Shared
private RestClient serviceClient
@Shared
private VolumeController volumeController
@Shared
private File metricsFile
def setupSpec() {
HttpsURLConnection.setDefaultSSLSocketFactory(new KeyStoreFactory(new MockServerLogger()).sslContext().getSocketFactory());
mockServer = ClientAndServer.startClientAndServer(PortFactory.findFreePort());
serviceClient = new RestClient(String.format("http://localhost:%d", mockServer.getPort()), "user", "password", false)
MockResponses.prepareClientResponseForLogin(mockServer)
//MockResponses.prepareClientResponseForManagementConsole(mockServer)
//MockResponses.prepareClientResponseForManagedSystem(mockServer)
//MockResponses.prepareClientResponseForVirtualIOServer(mockServer)
//MockResponses.prepareClientResponseForLogicalPartition(mockServer)
serviceClient.login()
volumeController = new VolumeController(serviceClient, );
volumeController.discover()
}
def cleanupSpec() {
mockServer.stop()
}
def setup() {
}
def "test we got entry"() {
expect:
volumeController.entry.getName() == "Server-9009-42A-SN21F64EV"
}
void "test getDetails"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getDetails()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().tags['servername'] == 'Server-9009-42A-SN21F64EV'
listOfMeasurements.first().fields['utilizedProcUnits'] == 0.00458
listOfMeasurements.first().fields['assignedMem'] == 40448.0
}
void "test getMemoryMetrics"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getMemoryMetrics()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().fields['totalMem'] == 1048576.000
}
void "test getProcessorMetrics"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getProcessorMetrics()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().fields['availableProcUnits'] == 4.65
}
void "test getSystemSharedProcessorPools"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getSharedProcessorPools()
then:
listOfMeasurements.size() == 4
listOfMeasurements.first().fields['assignedProcUnits'] == 22.00013
}
void "test getPhysicalProcessorPool"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getPhysicalProcessorPool()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().fields['assignedProcUnits'] == 22.0
}
}