Initial commit of working code

Mark Nellemann 2022-11-28 14:56:34 +01:00
commit 24c5fb78d2
53 changed files with 5586 additions and 0 deletions

11
.editorconfig Normal file

@@ -0,0 +1,11 @@
root = true
[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
indent_size = 4
[*.{yml,json}]
indent_size = 2

6
.gitattributes vendored Normal file

@@ -0,0 +1,6 @@
#
# https://help.github.com/articles/dealing-with-line-endings/
#
# These are explicitly windows files and should use crlf
*.bat text eol=crlf

8
.gitignore vendored Normal file

@@ -0,0 +1,8 @@
.idea
.vscode
.gradle
.project
.classpath
.settings
bin
build

3
CHANGELOG.md Normal file

@@ -0,0 +1,3 @@
# Changelog
All notable changes to this project will be documented in this file.

202
LICENSE Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

149
README.md Normal file

@@ -0,0 +1,149 @@
# SVC Insights
**SVCi** is a utility that collects metrics from one or more *IBM SAN Volume Controllers*. The metric data is processed and saved into an InfluxDB time-series database. Grafana is used to visualize the metrics data from InfluxDB through provided dashboards, or your own customized dashboards.
This software is free to use and is licensed under the [Apache 2.0 License](https://bitbucket.org/mnellemann/svci/src/master/LICENSE), but is not supported or endorsed by International Business Machines (IBM).
![architecture](doc/SVCi.png)
## Installation and Setup
There are a few steps to the installation.
1. Installation of InfluxDB and Grafana software
2. Installation and configuration of *SVC Insights* (SVCi)
3. Configure Grafana and import example dashboards
### 1 - InfluxDB and Grafana Installation
Install InfluxDB (v. **1.8.x** or **1.9.x** for best compatibility with Grafana) on a host that is network accessible by the SVCi utility (the default InfluxDB port is 8086). You can install Grafana on the same server or on any other server that can connect to the InfluxDB database. The Grafana installation needs to be accessible from your browser (by default on port 3000). The default settings for both InfluxDB and Grafana will work fine as a start.
- You can download [Grafana ppc64le](https://www.power-devops.com/grafana) and [InfluxDB ppc64le](https://www.power-devops.com/influxdb) packages for most Linux distributions and AIX on the [Power DevOps](https://www.power-devops.com/) site.
- Binaries for amd64/x86 are available from the [Grafana website](https://grafana.com/grafana/download) (select the **OSS variant**) and the [InfluxDB website](https://portal.influxdata.com/downloads/), and most likely also directly from your Linux distribution's repositories.
- Create the empty *svci* database by running the **influx** CLI command and typing:
```text
CREATE DATABASE "svci" WITH DURATION 365d REPLICATION 1;
```
See the [Influx documentation](https://docs.influxdata.com/influxdb/v1.8/query_language/manage-database/#create-database) for more information on duration and replication.
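As an optional sanity check (not part of the original steps), the new database should now appear when listing databases from the same **influx** CLI:

```text
SHOW DATABASES
```

The output should include *svci* among the database names.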
### 2 - SVCi Installation & Configuration
Install *SVCi* on a host that can connect to your SAN Volume Controller (on port 7443) and is also allowed to connect to the InfluxDB service. This *can be* the same LPAR/VM as used for the InfluxDB installation.
- Ensure you have **correct date/time** and NTPd running to keep it accurate!
- The only requirement for **svci** is the Java runtime, version 8 (or later)
- Install **SVCi** from [downloads](https://bitbucket.org/mnellemann/svci/downloads/) (rpm, deb or jar) or build from source
- On RPM based systems: ```sudo rpm -ivh svci-x.y.z-n.noarch.rpm```
- On DEB based systems: ```sudo dpkg -i svci_x.y.z-n_all.deb```
- Copy the **/opt/svci/doc/svci.toml** configuration example into **/etc/svci.toml** and edit the configuration to suit your environment. The location of the configuration file can optionally be changed with the *--conf* option.
- Run the **/opt/svci/bin/svci** program in a shell, as a @reboot cron task or configure as a proper service - there are instructions in the [doc/readme-service.md](doc/readme-service.md) file.
- When started, *svci* expects the InfluxDB database to exist already.
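If you go the cron route, a minimal @reboot entry could look like the sketch below (the log path is a hypothetical choice; install the entry with `crontab -e` for the user that should run svci):

```text
@reboot /opt/svci/bin/svci --conf /etc/svci.toml >> /tmp/svci.log 2>&1
```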
### 3 - Grafana Configuration
- Configure Grafana to use InfluxDB as a new datasource
- **NOTE:** set *Min time interval* to *30s* or *1m* depending on your SVCi *update* setting.
- Import example dashboards from [doc/dashboards/*.json](doc/dashboards/) into Grafana as a starting point and get creative making your own cool dashboards - please share anything useful :)
## Notes
### No data (or past/future data) shown in Grafana
This is most likely due to the timezone, date and/or NTP not being configured correctly on the SAN Volume Controller and/or the host running SVCi.
### Start InfluxDB and Grafana at boot (systemd compatible Linux)
```shell
systemctl enable influxdb
systemctl start influxdb
systemctl enable grafana-server
systemctl start grafana-server
```
### InfluxDB Retention Policy
Examples for changing the default InfluxDB retention policy for the svci database:
```text
ALTER RETENTION POLICY "autogen" ON "svci" DURATION 156w
ALTER RETENTION POLICY "autogen" ON "svci" DURATION 90d
```
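To see which retention policy is currently in effect on the database (an optional check), run the following in the **influx** CLI:

```text
SHOW RETENTION POLICIES ON "svci"
```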
### Upgrading SVCi
On RPM based systems (RedHat, Suse, CentOS), download the latest *svci-x.y.z-n.noarch.rpm* file and upgrade:
```shell
sudo rpm -Uvh svci-x.y.z-n.noarch.rpm
```
On DEB based systems (Debian, Ubuntu and derivatives), download the latest *svci_x.y.z-n_all.deb* file and upgrade:
```shell
sudo dpkg -i svci_x.y.z-n_all.deb
```
Restart the SVCi service on *systemd* based Linux systems:
```shell
systemctl restart svci
journalctl -f -u svci # to check log output
```
### AIX Notes
To install (or upgrade) on AIX, you need to pass the *--ignoreos* flag to the *rpm* command:
```shell
rpm -Uvh --ignoreos svci-x.y.z-n.noarch.rpm
```
## Known problems
## Development Information
You need Java (JDK) version 8 or later to build svci.
### Build & Test
Use the Gradle build tool, which will download all required dependencies:
```shell
./gradlew clean build
```
### Local Testing
#### InfluxDB
Start the InfluxDB container:
```shell
docker run --name=influxdb --rm -d -p 8086:8086 influxdb:1.8
```
Create the *svci* database:
```shell
docker exec -i influxdb influx -execute "CREATE DATABASE svci"
```
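Optionally verify that the database was created (assuming the *influxdb* container from above is still running):

```shell
docker exec -i influxdb influx -execute "SHOW DATABASES"
```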
#### Grafana
Start the Grafana container, linking it to the InfluxDB container:
```shell
docker run --name grafana --link influxdb:influxdb --rm -d -p 3000:3000 grafana/grafana
```
Set up Grafana to connect to the InfluxDB container by defining a new datasource named *svci* with the URL *http://influxdb:8086*.
Grafana dashboards can be imported from the *doc/dashboards/* folder.

22
bitbucket-pipelines.yml Normal file

@@ -0,0 +1,22 @@
image: eclipse-temurin:8-jdk
pipelines:
branches:
master:
- step:
caches:
- gradle
name: Build and Test
script:
- ./gradlew clean build
tags: # add the 'tags' section
v*: # specify the tag
- step: # define the build pipeline for the tag
caches:
- gradle
name: Build and Release
script:
- ./gradlew clean build shadowJar startShadowScripts buildRpm buildDeb
- shopt -s nullglob ; for file in ${BITBUCKET_CLONE_DIR}/build/libs/*-all.jar ; do curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"${file}" ; done
- shopt -s nullglob ; for file in ${BITBUCKET_CLONE_DIR}/build/distributions/*.rpm ; do curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"${file}" ; done
- shopt -s nullglob ; for file in ${BITBUCKET_CLONE_DIR}/build/distributions/*.deb ; do curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"${file}" ; done

136
build.gradle Normal file

@@ -0,0 +1,136 @@
plugins {
id 'java'
id 'groovy'
id 'application'
// Code coverage of tests
id 'jacoco'
id "com.github.johnrengelman.shadow" version "7.1.2"
id "net.nemerosa.versioning" version "2.15.1"
id "nebula.ospackage" version "9.1.1"
}
repositories {
mavenCentral()
mavenLocal()
}
group = projectGroup
version = projectVersion
dependencies {
annotationProcessor 'info.picocli:picocli-codegen:4.7.0'
implementation 'info.picocli:picocli:4.7.0'
implementation 'org.influxdb:influxdb-java:2.23'
//implementation 'com.influxdb:influxdb-client-java:6.7.0'
implementation 'org.slf4j:slf4j-api:2.0.4'
implementation 'org.slf4j:slf4j-simple:2.0.4'
implementation 'com.squareup.okhttp3:okhttp:4.10.0' // Also used by InfluxDB Client
//implementation "org.eclipse.jetty:jetty-client:9.4.49.v20220914"
implementation 'com.fasterxml.jackson.core:jackson-databind:2.14.1'
implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.14.1'
implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-toml:2.14.1'
testImplementation 'junit:junit:4.13.2'
testImplementation 'org.spockframework:spock-core:2.3-groovy-3.0'
testImplementation "org.mock-server:mockserver-netty-no-dependencies:5.14.0"
}
application {
mainClass.set('biz.nellemann.svci.Application')
applicationDefaultJvmArgs = [ "-server", "-Xms64m", "-Xmx64m", "-XX:+UseG1GC", "-XX:+ExitOnOutOfMemoryError", "-XX:+AlwaysPreTouch" ]
}
java {
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8
}
test {
useJUnitPlatform()
}
apply plugin: 'nebula.ospackage'
ospackage {
packageName = 'svci'
release = '1'
user = 'root'
packager = "Mark Nellemann <mark.nellemann@gmail.com>"
into '/opt/svci'
from(shadowJar.outputs.files) {
into 'lib'
}
from('build/scriptsShadow') {
into 'bin'
}
from('doc/') {
into 'doc'
}
from(['README.md', 'LICENSE']) {
into 'doc'
}
}
buildRpm {
dependsOn startShadowScripts
os = "LINUX"
}
buildDeb {
dependsOn startShadowScripts
}
jacoco {
toolVersion = "0.8.8"
}
jacocoTestReport {
group = "verification"
reports {
xml.required = false
csv.required = false
html.destination file("${buildDir}/reports/coverage")
}
}
test.finalizedBy jacocoTestReport
jacocoTestCoverageVerification {
violationRules {
rule {
limit {
minimum = 0.1
}
}
}
}
check.dependsOn jacocoTestCoverageVerification
jar {
manifest {
attributes(
'Created-By' : "Gradle ${gradle.gradleVersion}",
'Build-OS' : "${System.properties['os.name']} ${System.properties['os.arch']} ${System.properties['os.version']}",
'Build-Jdk' : "${System.properties['java.version']} (${System.properties['java.vendor']} ${System.properties['java.vm.version']})",
'Build-User' : System.properties['user.name'],
'Build-Version' : versioning.info.tag ?: (versioning.info.branch + "-" + versioning.info.build),
'Build-Revision' : versioning.info.commit,
'Build-Timestamp': new Date().format("yyyy-MM-dd'T'HH:mm:ss.SSSZ").toString(),
'Add-Opens' : 'java.base/java.lang.invoke' // To ignore "Illegal reflective access by retrofit2.Platform" warnings
)
}
}
tasks.create("packages") {
group "build"
dependsOn ":build"
dependsOn ":buildDeb"
dependsOn ":buildRpm"
}

1
doc/SVCi.drawio Normal file

File diff suppressed because one or more lines are too long

BIN
doc/SVCi.png Normal file

Binary file not shown.

Size: 110 KiB

File diff suppressed because it is too large.

21
doc/readme-aix.md Normal file

@@ -0,0 +1,21 @@
# Instructions for AIX Systems
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file.
- Grafana and InfluxDB can be downloaded from the [Power DevOps](https://www.power-devops.com/) website - look under the *Monitor* section.
- Ensure Java (version 8 or later) is installed and available in your PATH.
## Download and Install svci
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci-0.0.1-1_all.rpm
rpm -i --ignoreos svci-0.0.1-1_all.rpm
cp /opt/svci/doc/svci.toml /etc/
```
Now modify */etc/svci.toml* and test your setup by running ```/opt/svci/bin/svci -d```

54
doc/readme-debian.md Normal file

@@ -0,0 +1,54 @@
# Instructions for Debian / Ubuntu Systems
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file.
All commands should be run as root or through sudo.
## Install the Java Runtime from repository
```shell
apt-get install default-jre-headless
```
## Download and Install InfluxDB
```shell
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.8.10_amd64.deb
dpkg -i influxdb_1.8.10_amd64.deb
systemctl daemon-reload
systemctl enable influxdb
systemctl start influxdb
```
Run the ```influx``` cli command and create the *svci* database.
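The database creation can also be done non-interactively; the command below assumes InfluxDB is running locally on the default port, and mirrors the duration and replication settings from the main README:

```shell
influx -execute 'CREATE DATABASE "svci" WITH DURATION 365d REPLICATION 1'
```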
## Download and Install Grafana
```shell
sudo apt-get install -y adduser libfontconfig1
wget https://dl.grafana.com/oss/release/grafana_9.1.3_amd64.deb
dpkg -i grafana_9.1.3_amd64.deb
systemctl daemon-reload
systemctl enable grafana-server
systemctl start grafana-server
```
When logged in to Grafana (port 3000, admin/admin), create a datasource that points to the local InfluxDB. Then import the provided dashboards.
## Download and Install svci
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci_0.0.1-1_all.deb
dpkg -i svci_0.0.1-1_all.deb
cp /opt/svci/doc/svci.toml /etc/
cp /opt/svci/doc/svci.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable svci
```
Now modify */etc/svci.toml* and test your setup by running ```/opt/svci/bin/svci -d``` manually, verifying the connection to SVC and InfluxDB. Afterwards, start the service with ```systemctl start svci```.

14
doc/readme-firewall.md Normal file

@@ -0,0 +1,14 @@
# Firewall Notes
## RedHat, CentOS, Rocky & Alma Linux
And any other Linux distribution using *firewalld*.
All commands should be run as root or through sudo.
### Allow remote access to Grafana on port 3000
```shell
firewall-cmd --zone=public --add-port=3000/tcp --permanent
firewall-cmd --reload
```

56
doc/readme-redhat.md Normal file

@@ -0,0 +1,56 @@
# Instructions for RedHat / CentOS / AlmaLinux Systems
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file. If you are running Linux on Power (ppc64le) you should look for ppc64le packages at the [Power DevOps](https://www.power-devops.com/) website.
All commands should be run as root or through sudo.
## Install the Java Runtime from repository
```shell
dnf install java-11-openjdk-headless
# or
yum install java-11-openjdk-headless
```
## Download and Install InfluxDB
```shell
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.10.x86_64.rpm
rpm -ivh influxdb-1.8.10.x86_64.rpm
systemctl daemon-reload
systemctl enable influxdb
systemctl start influxdb
```
Run the ```influx``` cli command and create the *svci* database.
## Download and Install Grafana
```shell
wget https://dl.grafana.com/oss/release/grafana-9.1.3-1.x86_64.rpm
rpm -ivh grafana-9.1.3-1.x86_64.rpm
systemctl daemon-reload
systemctl enable grafana-server
systemctl start grafana-server
```
When logged in to Grafana (port 3000, admin/admin), create a datasource that points to the local InfluxDB. Then import the provided dashboards.
## Download and Install svci
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci-0.0.1-1_all.rpm
rpm -ivh svci-0.0.1-1_all.rpm
cp /opt/svci/doc/svci.toml /etc/
cp /opt/svci/doc/svci.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable svci
systemctl start svci
```
Now modify */etc/svci.toml* and test your setup by running ```/opt/svci/bin/svci -d``` manually, verifying the connection to SVC and InfluxDB. Afterwards, restart the service with ```systemctl restart svci```.

53
doc/readme-suse.md Normal file

@@ -0,0 +1,53 @@
# Instructions for SLES / OpenSUSE Systems
Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.
More details are available in the [README.md](../README.md) file. If you are running Linux on Power (ppc64le) you should look for ppc64le packages at the [Power DevOps](https://www.power-devops.com/) website.
All commands should be run as root or through sudo.
## Install the Java Runtime from repository
```shell
zypper install java-11-openjdk-headless
```
## Download and Install InfluxDB
```shell
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.10.x86_64.rpm
rpm -ivh influxdb-1.8.10.x86_64.rpm
systemctl daemon-reload
systemctl enable influxdb
systemctl start influxdb
```
Run the ```influx``` cli command and create the *svci* database.
## Download and Install Grafana
```shell
wget https://dl.grafana.com/oss/release/grafana-9.1.3-1.x86_64.rpm
rpm -ivh --nodeps grafana-9.1.3-1.x86_64.rpm
systemctl daemon-reload
systemctl enable grafana-server
systemctl start grafana-server
```
When logged in to Grafana (port 3000, admin/admin), create a datasource that points to the local InfluxDB. Then import the provided dashboards.
## Download and Install SVCi
```shell
wget https://bitbucket.org/mnellemann/svci/downloads/svci-0.0.1-1_all.rpm
rpm -ivh svci-0.0.1-1_all.rpm
cp /opt/svci/doc/svci.toml /etc/
cp /opt/svci/doc/svci.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable svci
```
Now modify */etc/svci.toml* and test your setup by running ```/opt/svci/bin/svci -d``` manually, verifying the connection to SVC and InfluxDB. Afterwards, start the service with ```systemctl start svci```.

12
doc/svci.service Normal file

@@ -0,0 +1,12 @@
[Unit]
Description=SVC Insights Service
[Service]
#User=nobody
#Group=nobody
TimeoutSec=20
Restart=on-failure
ExecStart=/opt/svci/bin/svci
[Install]
WantedBy=default.target

18
doc/svci.toml Normal file

@@ -0,0 +1,18 @@
# SVCi Configuration
# InfluxDB to save metrics
[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "svci"
# SVC on our primary site
[svc.site1]
url = "https://10.10.10.12:7443"
username = "superuser"
password = "password"
refresh = 10
discover = 120
trust = true # Ignore SSL cert. errors

3
gradle.properties Normal file

@@ -0,0 +1,3 @@
projectId = svci
projectGroup = biz.nellemann.svci
projectVersion = 0.0.1

BIN
gradle/wrapper/gradle-wrapper.jar vendored Normal file

Binary file not shown.

5
gradle/wrapper/gradle-wrapper.properties vendored Normal file

@@ -0,0 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.5.1-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

234
gradlew vendored Executable file

@@ -0,0 +1,234 @@
#!/bin/sh
#
# Copyright © 2015-2021 the original authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##############################################################################
#
# Gradle start up script for POSIX generated by Gradle.
#
# Important for running:
#
# (1) You need a POSIX-compliant shell to run this script. If your /bin/sh is
# noncompliant, but you have some other compliant shell such as ksh or
# bash, then to run this script, type that shell name before the whole
# command line, like:
#
# ksh Gradle
#
# Busybox and similar reduced shells will NOT work, because this script
# requires all of these POSIX shell features:
# * functions;
# * expansions «$var», «${var}», «${var:-default}», «${var+SET}»,
# «${var#prefix}», «${var%suffix}», and «$( cmd )»;
# * compound commands having a testable exit status, especially «case»;
# * various built-in commands including «command», «set», and «ulimit».
#
# Important for patching:
#
# (2) This script targets any POSIX shell, so it avoids extensions provided
# by Bash, Ksh, etc; in particular arrays are avoided.
#
# The "traditional" practice of packing multiple parameters into a
# space-separated string is a well documented source of bugs and security
# problems, so this is (mostly) avoided, by progressively accumulating
# options in "$@", and eventually passing that to Java.
#
# Where the inherited environment variables (DEFAULT_JVM_OPTS, JAVA_OPTS,
# and GRADLE_OPTS) rely on word-splitting, this is performed explicitly;
# see the in-line comments for details.
#
# There are tweaks for specific operating systems such as AIX, CygWin,
# Darwin, MinGW, and NonStop.
#
# (3) This script is generated from the Groovy template
# https://github.com/gradle/gradle/blob/master/subprojects/plugins/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt
# within the Gradle project.
#
# You can find Gradle at https://github.com/gradle/gradle/.
#
##############################################################################
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
app_path=$0
# Need this for daisy-chained symlinks.
while
APP_HOME=${app_path%"${app_path##*/}"} # leaves a trailing /; empty if no leading path
[ -h "$app_path" ]
do
ls=$( ls -ld "$app_path" )
link=${ls#*' -> '}
case $link in #(
/*) app_path=$link ;; #(
*) app_path=$APP_HOME$link ;;
esac
done
APP_HOME=$( cd "${APP_HOME:-./}" && pwd -P ) || exit
APP_NAME="Gradle"
APP_BASE_NAME=${0##*/}
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD=maximum
warn () {
echo "$*"
} >&2
die () {
echo
echo "$*"
echo
exit 1
} >&2
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
nonstop=false
case "$( uname )" in #(
CYGWIN* ) cygwin=true ;; #(
Darwin* ) darwin=true ;; #(
MSYS* | MINGW* ) msys=true ;; #(
NONSTOP* ) nonstop=true ;;
esac
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD=$JAVA_HOME/jre/sh/java
else
JAVACMD=$JAVA_HOME/bin/java
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD=java
command -v java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
# Increase the maximum file descriptors if we can.
if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then
case $MAX_FD in #(
max*)
MAX_FD=$( ulimit -H -n ) ||
warn "Could not query maximum file descriptor limit"
esac
case $MAX_FD in #(
'' | soft) :;; #(
*)
ulimit -n "$MAX_FD" ||
warn "Could not set maximum file descriptor limit to $MAX_FD"
esac
fi
# Collect all arguments for the java command, stacking in reverse order:
# * args from the command line
# * the main class name
# * -classpath
# * -D...appname settings
# * --module-path (only if needed)
# * DEFAULT_JVM_OPTS, JAVA_OPTS, and GRADLE_OPTS environment variables.
# For Cygwin or MSYS, switch paths to Windows format before running java
if "$cygwin" || "$msys" ; then
APP_HOME=$( cygpath --path --mixed "$APP_HOME" )
CLASSPATH=$( cygpath --path --mixed "$CLASSPATH" )
JAVACMD=$( cygpath --unix "$JAVACMD" )
# Now convert the arguments - kludge to limit ourselves to /bin/sh
for arg do
if
case $arg in #(
-*) false ;; # don't mess with options #(
/?*) t=${arg#/} t=/${t%%/*} # looks like a POSIX filepath
[ -e "$t" ] ;; #(
*) false ;;
esac
then
arg=$( cygpath --path --ignore --mixed "$arg" )
fi
# Roll the args list around exactly as many times as the number of
# args, so each arg winds up back in the position where it started, but
# possibly modified.
#
# NB: a `for` loop captures its iteration list before it begins, so
# changing the positional parameters here affects neither the number of
# iterations, nor the values presented in `arg`.
shift # remove old arg
set -- "$@" "$arg" # push replacement arg
done
fi
# Collect all arguments for the java command;
# * $DEFAULT_JVM_OPTS, $JAVA_OPTS, and $GRADLE_OPTS can contain fragments of
# shell script including quotes and variable substitutions, so put them in
# double quotes to make sure that they get re-expanded; and
# * put everything else in single quotes, so that it's not re-expanded.
set -- \
"-Dorg.gradle.appname=$APP_BASE_NAME" \
-classpath "$CLASSPATH" \
org.gradle.wrapper.GradleWrapperMain \
"$@"
# Use "xargs" to parse quoted args.
#
# With -n1 it outputs one arg per line, with the quotes and backslashes removed.
#
# In Bash we could simply go:
#
# readarray ARGS < <( xargs -n1 <<<"$var" ) &&
# set -- "${ARGS[@]}" "$@"
#
# but POSIX shell has neither arrays nor process substitution, so instead we
# post-process each arg (as a line of input to sed) to backslash-escape any
# character that might be a shell metacharacter, then use eval to reverse
# that process (while maintaining the separation between arguments), and wrap
# the whole thing up as a single "set" statement.
#
# This will of course break if any of these variables contains a newline or
# an unmatched quote.
#
eval "set -- $(
printf '%s\n' "$DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS" |
xargs -n1 |
sed ' s~[^-[:alnum:]+,./:=@_]~\\&~g; ' |
tr '\n' ' '
)" '"$@"'
exec "$JAVACMD" "$@"

89
gradlew.bat vendored Normal file
View file

@ -0,0 +1,89 @@
@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto execute
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto execute
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %*
:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega

1
settings.gradle Normal file
View file

@ -0,0 +1 @@
rootProject.name = 'svci'

View file

@ -0,0 +1,105 @@
/*
Copyright 2022 mark.nellemann@gmail.com
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package biz.nellemann.svci;
import biz.nellemann.svci.dto.toml.Configuration;
import com.fasterxml.jackson.dataformat.toml.TomlMapper;
import picocli.CommandLine;
import picocli.CommandLine.Option;
import picocli.CommandLine.Command;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
@Command(name = "svci",
mixinStandardHelpOptions = true,
versionProvider = biz.nellemann.svci.VersionProvider.class,
defaultValueProvider = biz.nellemann.svci.DefaultProvider.class)
public class Application implements Callable<Integer> {
@Option(names = { "-c", "--conf" }, description = "Configuration file [default: ${DEFAULT-VALUE}].", paramLabel = "<file>")
private File configurationFile;
@Option(names = { "-d", "--debug" }, description = "Enable debugging [default: false].")
private boolean[] enableDebug = new boolean[0];
public static void main(String... args) {
int exitCode = new CommandLine(new Application()).execute(args);
System.exit(exitCode);
}
@Override
public Integer call() {
InfluxClient influxClient;
List<Thread> threadList = new ArrayList<>();
if(!configurationFile.exists()) {
System.err.println("Error - No configuration file found at: " + configurationFile.toString());
return -1;
}
switch (enableDebug.length) {
case 1:
System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "DEBUG");
break;
case 2:
System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "TRACE");
break;
}
try {
TomlMapper mapper = new TomlMapper();
Configuration configuration = mapper.readerFor(Configuration.class)
.readValue(configurationFile);
influxClient = new InfluxClient(configuration.influx);
influxClient.login();
if(configuration.svc == null || configuration.svc.size() < 1) {
return 0;
}
configuration.svc.forEach((key, value) -> {
try {
VolumeController volumeController = new VolumeController(value, influxClient);
Thread t = new Thread(volumeController);
t.setName(key);
t.start();
threadList.add(t);
} catch (Exception e) {
System.err.println(e.getMessage());
}
});
for (Thread thread : threadList) {
thread.join();
}
influxClient.logoff();
} catch (Exception e) {
System.err.println(e.getMessage());
return 1;
}
return 0;
}
}
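Application.call() deserializes the configuration file into a Configuration object via TomlMapper, reading an influx section and a map of named svc entries. A minimal sketch of what such a file might look like; key names are inferred from the DTO fields referenced in this commit (InfluxConfiguration and SvcConfiguration), and all values are placeholders:

```toml
# Hypothetical svci.toml - keys inferred from the Configuration DTOs,
# values are made up for illustration.

[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "svci"

[svc.v7000]
url = "https://10.0.0.10:7443"
username = "superuser"
password = "secret"
refresh = 30    # seconds between refresh() polls
discover = 60   # seconds between discover() runs
trust = true    # accept self-signed certificates
```

Each `[svc.<name>]` table becomes one VolumeController thread, named after `<name>`.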

View file

@ -0,0 +1,33 @@
package biz.nellemann.svci;
import picocli.CommandLine;
public class DefaultProvider implements CommandLine.IDefaultValueProvider {
public String defaultValue(CommandLine.Model.ArgSpec argSpec) throws Exception {
if(argSpec.isOption()) {
switch (argSpec.paramLabel()) {
case "<file>":
return getDefaultConfigFileLocation();
default:
return null;
}
}
return null;
}
private boolean isWindowsOperatingSystem() {
String os = System.getProperty("os.name");
return os.toLowerCase().startsWith("windows");
}
private String getDefaultConfigFileLocation() {
String configFilePath;
if(isWindowsOperatingSystem()) {
configFilePath = System.getProperty("user.home") + "\\svci.toml";
} else {
configFilePath = "/etc/svci.toml";
}
return configFilePath;
}
}

View file

@ -0,0 +1,119 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.svci;
import biz.nellemann.svci.dto.toml.InfluxConfiguration;
import org.influxdb.BatchOptions;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import static java.lang.Thread.sleep;
public final class InfluxClient {
private final static Logger log = LoggerFactory.getLogger(InfluxClient.class);
final private String url;
final private String username;
final private String password;
final private String database;
private InfluxDB influxDB;
InfluxClient(InfluxConfiguration config) {
this.url = config.url;
this.username = config.username;
this.password = config.password;
this.database = config.database;
}
synchronized void login() throws RuntimeException, InterruptedException {
if(influxDB != null) {
return;
}
boolean connected = false;
int loginErrors = 0;
do {
try {
log.debug("Connecting to InfluxDB - {}", url);
influxDB = InfluxDBFactory.connect(url, username, password).setDatabase(database);
influxDB.version(); // This ensures that we actually try to connect to the db
influxDB.enableBatch(
BatchOptions.DEFAULTS
.threadFactory(runnable -> {
Thread thread = new Thread(runnable);
thread.setDaemon(true);
return thread;
})
);
Runtime.getRuntime().addShutdownHook(new Thread(influxDB::close));
connected = true;
} catch(Exception e) {
sleep(15 * 1000);
if(loginErrors++ > 3) {
log.error("login() - error, giving up: {}", e.getMessage());
throw new RuntimeException(e);
} else {
log.warn("login() - error, retrying: {}", e.getMessage());
}
}
} while(!connected);
}
synchronized void logoff() {
if(influxDB != null) {
influxDB.close();
}
influxDB = null;
}
public void write(List<Measurement> measurements, Instant timestamp, String measurement) {
log.debug("write() - measurement: {} {}", measurement, measurements.size());
processMeasurementMap(measurements, timestamp, measurement).forEach( (point) -> { influxDB.write(point); });
}
private List<Point> processMeasurementMap(List<Measurement> measurements, Instant timestamp, String measurement) {
List<Point> listOfPoints = new ArrayList<>();
measurements.forEach( (m) -> {
Point.Builder builder = Point.measurement(measurement)
.time(timestamp.toEpochMilli(), TimeUnit.MILLISECONDS)
.tag(m.tags)
.fields(m.fields);
listOfPoints.add(builder.build());
});
return listOfPoints;
}
}

View file

@ -0,0 +1,30 @@
/*
* Copyright 2022 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.svci;
import java.util.Map;
public class Measurement {
final Map<String, String> tags;
final Map<String, Object> fields;
Measurement(Map<String, String> tags, Map<String, Object> fields) {
this.tags = tags;
this.fields = fields;
}
}

View file

@ -0,0 +1,61 @@
package biz.nellemann.svci;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
public class Resource {
private final static Logger log = LoggerFactory.getLogger(Resource.class);
private final ObjectMapper objectMapper = new ObjectMapper();
Resource() {
objectMapper.enable(DeserializationFeature.UNWRAP_SINGLE_VALUE_ARRAYS);
objectMapper.enable(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY);
objectMapper.enable(DeserializationFeature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT);
}
void deserialize(String json) {
if(json == null || json.length() < 1) {
return;
}
try {
//ProcessedMetrics processedMetrics = objectMapper.readValue(json, ProcessedMetrics.class);
//metric = processedMetrics.systemUtil;
} catch (Exception e) {
log.error("deserialize() - error: {}", e.getMessage());
}
}
/*
Instant getTimestamp() {
Instant instant = Instant.now();
if (metric == null) {
return instant;
}
String timestamp = metric.getSample().sampleInfo.timestamp;
try {
log.trace("getTimeStamp() - PMC Timestamp: {}", timestamp);
DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss[XXX][X]");
instant = Instant.from(dateTimeFormatter.parse(timestamp));
log.trace("getTimestamp() - Instant: {}", instant.toString());
} catch(DateTimeParseException e) {
log.warn("getTimestamp() - parse error: {}", timestamp);
}
return instant;
}
*/
}
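The commented-out getTimestamp() above parses SVC sample timestamps with the pattern `yyyy-MM-dd'T'HH:mm:ss[XXX][X]`, where the bracketed sections make the UTC offset optional in extended (+01:00) or basic (+01) form. A standalone sketch of that parse, using a made-up sample value:

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class TimestampParseDemo {
    // Same pattern as the commented-out getTimestamp(); [XXX][X] makes the
    // UTC offset optional in either extended or basic form.
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss[XXX][X]");

    static Instant parse(String timestamp) {
        // Works when an offset is present; a value without any offset would
        // lack the instant information and throw DateTimeException here.
        return Instant.from(FMT.parse(timestamp));
    }

    public static void main(String[] args) {
        // Hypothetical sample with an explicit +01:00 offset.
        System.out.println(parse("2022-11-26T13:23:28+01:00")); // 2022-11-26T12:23:28Z
    }
}
```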

View file

@ -0,0 +1,237 @@
package biz.nellemann.svci;
import biz.nellemann.svci.dto.json.AuthResponse;
import com.fasterxml.jackson.databind.ObjectMapper;
import okhttp3.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.*;
import java.net.*;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
public class RestClient {
private final static Logger log = LoggerFactory.getLogger(RestClient.class);
protected OkHttpClient httpClient;
// OkHttpClient timeouts
private final static int CONNECT_TIMEOUT = 30;
private final static int WRITE_TIMEOUT = 30;
private final static int READ_TIMEOUT = 180;
protected String authToken;
protected final String baseUrl;
protected final String username;
protected final String password;
public RestClient(String baseUrl, String username, String password, Boolean trustAll) {
this.baseUrl = baseUrl;
this.username = username;
this.password = password;
if (trustAll) {
this.httpClient = getUnsafeOkHttpClient();
} else {
this.httpClient = getSafeOkHttpClient();
}
}
/**
* Logon to the SVC and get an authentication token for further requests.
*/
public synchronized void login() {
log.info("Connecting to SVC - {} @ {}", username, baseUrl);
try {
URL url = new URL(String.format("%s/rest/v1/auth", baseUrl));
Request request = new Request.Builder()
.url(url)
.addHeader("X-Audit-Memento", "IBM Power HMC Insights")
.addHeader("X-Auth-Username", username)
.addHeader("X-Auth-Password", password)
//.put(RequestBody.create(payload.toString(), MEDIA_TYPE_IBM_XML_LOGIN))
.post(RequestBody.create("", MediaType.get("text/plain")))
.build();
String responseBody;
try (Response response = httpClient.newCall(request).execute()) {
responseBody = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
log.warn("login() - Unexpected response: {}", response.code());
throw new IOException("Unexpected code: " + response);
}
}
log.debug(responseBody);
ObjectMapper objectMapper = new ObjectMapper();
AuthResponse authResponse = objectMapper.readValue(responseBody, AuthResponse.class);
authToken = authResponse.token;
log.debug("login() - auth token: {}", authToken);
} catch (Exception e) {
log.warn("login() - error: {}", e.getMessage());
}
}
public String postRequest(String urlPath) throws IOException {
URL absUrl = new URL(String.format("%s%s", baseUrl, urlPath));
return postRequest(absUrl, null);
}
public String postRequest(String urlPath, String payload) throws IOException {
URL absUrl = new URL(String.format("%s%s", baseUrl, urlPath));
return postRequest(absUrl, payload);
}
/**
* Send a POST request with a payload (can be null) to the SVC.
* @param url full URL to send the request to
* @param payload request body, or null for an empty body
* @return response body as a String
* @throws IOException if the request fails or the response code is unexpected
*/
public synchronized String postRequest(URL url, String payload) throws IOException {
log.trace("postRequest() - URL: {}", url.toString());
RequestBody requestBody;
if(payload != null) {
requestBody = RequestBody.create(payload, MediaType.get("application/json"));
} else {
requestBody = RequestBody.create("", null);
}
Request request = new Request.Builder()
.url(url)
.addHeader("accept", "application/json")
.addHeader("Content-Type", "application/json")
.addHeader("X-Auth-Token", (authToken == null ? "" : authToken) )
.post(requestBody).build();
String responseBody;
try (Response response = httpClient.newCall(request).execute()) {
responseBody = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
if(response.code() == 401) {
log.warn("postRequest() - 401 - login and retry.");
// Let's login again and retry
login();
return retryPostRequest(url, payload);
}
log.warn(responseBody);
log.error("postRequest() - Unexpected response: {}", response.code());
throw new IOException("postRequest() - Unexpected response: " + response.code());
}
}
return responseBody;
}
private String retryPostRequest(URL url, String payload) throws IOException {
log.debug("retryPostRequest() - URL: {}", url.toString());
RequestBody requestBody;
if(payload != null) {
requestBody = RequestBody.create(payload, MediaType.get("application/json"));
} else {
requestBody = RequestBody.create("", null);
}
Request request = new Request.Builder()
.url(url)
.addHeader("accept", "application/json")
.addHeader("Content-Type", "application/json")
.addHeader("X-Auth-Token", (authToken == null ? "" : authToken) )
.post(requestBody).build();
String responseBody = null;
try (Response response = httpClient.newCall(request).execute()) {
if(response.isSuccessful()) {
responseBody = response.body().string();
}
}
return responseBody;
}
/**
* Provide an unsafe (ignoring SSL problems) OkHttpClient
*
* @return OkHttpClient ignoring SSL/TLS errors
*/
private static OkHttpClient getUnsafeOkHttpClient() {
try {
// Create a trust manager that does not validate certificate chains
final TrustManager[] trustAllCerts = new TrustManager[] {
new X509TrustManager() {
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType) { }
@Override
public void checkServerTrusted(X509Certificate[] chain, String authType) {
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[]{};
}
}
};
// Install the all-trusting trust manager
final SSLContext sslContext = SSLContext.getInstance("SSL");
sslContext.init(null, trustAllCerts, new SecureRandom());
// Create a ssl socket factory with our all-trusting manager
final SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.sslSocketFactory(sslSocketFactory, (X509TrustManager)trustAllCerts[0]);
builder.hostnameVerifier((hostname, session) -> true);
builder.connectTimeout(CONNECT_TIMEOUT, TimeUnit.SECONDS);
builder.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS);
builder.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS);
return builder.build();
} catch (KeyManagementException | NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
}
/**
* Get OkHttpClient with our preferred timeout values.
* @return OkHttpClient
*/
private static OkHttpClient getSafeOkHttpClient() {
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.connectTimeout(CONNECT_TIMEOUT, TimeUnit.SECONDS);
builder.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS);
builder.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS);
return builder.build();
}
}

View file

@ -0,0 +1,35 @@
/*
* Copyright 2022 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.svci;
import picocli.CommandLine;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.Manifest;
class VersionProvider implements CommandLine.IVersionProvider {
@Override
public String[] getVersion() throws IOException {
Manifest manifest = new Manifest(getClass().getResourceAsStream("/META-INF/MANIFEST.MF"));
Attributes attrs = manifest.getMainAttributes();
return new String[] { "${COMMAND-FULL-NAME} " + attrs.getValue("Build-Version") };
}
}

View file

@ -0,0 +1,226 @@
/*
* Copyright 2022 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.svci;
import biz.nellemann.svci.dto.json.EnclosureStat;
import biz.nellemann.svci.dto.json.NodeStat;
import biz.nellemann.svci.dto.json.System;
import biz.nellemann.svci.dto.toml.SvcConfiguration;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import static java.lang.Thread.sleep;
class VolumeController implements Runnable {
private final static Logger log = LoggerFactory.getLogger(VolumeController.class);
private final ObjectMapper objectMapper = new ObjectMapper();
private final Integer refreshValue;
private final Integer discoverValue;
//private final List<ManagedSystem> managedSystems = new ArrayList<>();
private final RestClient restClient;
private final InfluxClient influxClient;
private final AtomicBoolean keepRunning = new AtomicBoolean(true);
protected Integer responseErrors = 0;
protected System system;
VolumeController(SvcConfiguration configuration, InfluxClient influxClient) {
this.refreshValue = configuration.refresh;
this.discoverValue = configuration.discover;
this.influxClient = influxClient;
restClient = new RestClient(configuration.url, configuration.username, configuration.password, configuration.trust);
}
@Override
public void run() {
log.trace("run()");
restClient.login();
discover();
do {
Instant instantStart = Instant.now();
try {
refresh();
} catch (Exception e) {
log.error("run() - fatal error: {}", e.getMessage());
keepRunning.set(false);
throw new RuntimeException(e);
}
Instant instantEnd = Instant.now();
long timeSpend = Duration.between(instantStart, instantEnd).toMillis();
log.trace("run() - duration millis: " + timeSpend);
if(timeSpend < (refreshValue * 1000)) {
try {
long sleepTime = (refreshValue * 1000) - timeSpend;
log.trace("run() - sleeping millis: " + sleepTime);
if(sleepTime > 0) {
//noinspection BusyWait
sleep(sleepTime);
}
} catch (InterruptedException e) {
log.error("run() - sleep interrupted", e);
}
} else {
log.warn("run() - possible slow response from this SVC");
}
} while (keepRunning.get());
}
void discover() {
log.debug("discover()");
influxClient.write(getSystem(), Instant.now(),"system");
}
void refresh() {
log.debug("refresh()");
influxClient.write(getNodeStats(), Instant.now(),"node_stats");
influxClient.write(getEnclosureStats(), Instant.now(),"enclosure_stats");
}
List<Measurement> getSystem() {
List<Measurement> measurementList = new ArrayList<>();
try {
String response = restClient.postRequest("/rest/v1/lssystem");
// Do not try to parse empty response
if(response == null || response.length() <= 1) {
log.warn("getSystem() - no data.");
return measurementList;
}
// Save for use elsewhere when referring to system name
system = objectMapper.readValue(response, System.class);
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();
tagsMap.put("name", system.name);
fieldsMap.put("location", system.location);
fieldsMap.put("code_level", system.codeLevel);
fieldsMap.put("product_name", system.productName);
log.trace("getSystem() - fields: " + fieldsMap);
measurementList.add(new Measurement(tagsMap, fieldsMap));
} catch (IOException e) {
log.error("getSystem() - error: {}", e.getMessage());
}
return measurementList;
}
List<Measurement> getNodeStats() {
List<Measurement> measurementList = new ArrayList<>();
try {
String response = restClient.postRequest("/rest/v1/lsnodestats");
// Do not try to parse empty response
if(response == null || response.length() <= 1) {
log.warn("getNodeStats() - no data.");
return measurementList;
}
List<NodeStat> pojo = Arrays.asList(objectMapper.readValue(response, NodeStat[].class));
pojo.forEach((stat) -> {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();
tagsMap.put("id", stat.nodeId);
tagsMap.put("name", stat.nodeName);
tagsMap.put("system", system.name);
fieldsMap.put(stat.statName, stat.statCurrent);
log.trace("getNodeStats() - fields: " + fieldsMap);
measurementList.add(new Measurement(tagsMap, fieldsMap));
//log.info("{}: {} -> {}", stat.nodeName, stat.statName, stat.statCurrent);
});
} catch (IOException e) {
log.error("getNodeStats() - error: {}", e.getMessage());
}
return measurementList;
}
List<Measurement> getEnclosureStats() {
List<Measurement> measurementList = new ArrayList<>();
try {
String response = restClient.postRequest("/rest/v1/lsenclosurestats");
// Do not try to parse empty response
if(response == null || response.length() <= 1) {
log.warn("getEnclosureStats() - no data.");
return measurementList;
}
List<EnclosureStat> pojo = Arrays.asList(objectMapper.readValue(response, EnclosureStat[].class));
pojo.forEach((stat) -> {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();
tagsMap.put("id", stat.enclosureId);
tagsMap.put("system", system.name);
fieldsMap.put(stat.statName, stat.statCurrent);
log.trace("getEnclosureStats() - fields: " + fieldsMap);
measurementList.add(new Measurement(tagsMap, fieldsMap));
//log.info("{}: {} -> {}", stat.nodeName, stat.statName, stat.statCurrent);
});
} catch (IOException e) {
log.error("getEnclosureStats() - error: {}", e.getMessage());
}
return measurementList;
}
}
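run() paces its polling by subtracting the time a refresh() took from the configured interval before sleeping, and skips the sleep entirely when the poll overran. That remainder calculation, isolated as a sketch (class and method names here are mine, not from the commit):

```java
import java.time.Duration;
import java.time.Instant;

public class RefreshPacing {
    // Remainder of the refresh interval after a poll; never negative, so a
    // slow poll rolls straight into the next iteration instead of sleeping.
    static long sleepMillis(Instant start, Instant end, int refreshSeconds) {
        long spentMillis = Duration.between(start, end).toMillis();
        return Math.max(refreshSeconds * 1000L - spentMillis, 0L);
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2022-11-28T12:00:00Z");
        Instant t1 = t0.plusSeconds(3); // the poll took 3 seconds
        System.out.println(sleepMillis(t0, t1, 30)); // 27000
    }
}
```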

View file

@ -0,0 +1,7 @@
package biz.nellemann.svci.dto.json;
public class AuthResponse {
public String token;
}

View file

@ -0,0 +1,29 @@
package biz.nellemann.svci.dto.json;
import com.fasterxml.jackson.annotation.JsonProperty;
public class EnclosureStat {
@JsonProperty("enclosure_id")
public String enclosureId;
@JsonProperty("stat_name")
public String statName;
@JsonProperty("stat_current")
public Number statCurrent;
@JsonProperty("stat_peak")
public Number statPeak;
@JsonProperty("stat_peak_time")
public Number statPeakTime;
/*
"enclosure_id": "1",
"stat_name": "power_w",
"stat_current": "332",
"stat_peak": "333",
"stat_peak_time": "221126132328"
*/
}

View file

@ -0,0 +1,35 @@
package biz.nellemann.svci.dto.json;
import com.fasterxml.jackson.annotation.JsonProperty;
public class NodeStat {
@JsonProperty("node_id")
public String nodeId;
@JsonProperty("node_name")
public String nodeName;
@JsonProperty("stat_name")
public String statName;
@JsonProperty("stat_current")
public Number statCurrent;
@JsonProperty("stat_peak")
public Number statPeak;
@JsonProperty("stat_peak_time")
public Number statPeakTime;
/*
{
"node_id": "2",
"node_name": "node2",
"stat_name": "cloud_down_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
*/
}

View file

@ -0,0 +1,159 @@
package biz.nellemann.svci.dto.json;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
@JsonIgnoreProperties(ignoreUnknown = true)
public class System {
public String name;
public String location;
@JsonProperty("statistics_status")
public String statisticsStatus;
@JsonProperty("statistics_frequency")
public Number statisticsFrequency;
@JsonProperty("code_level")
public String codeLevel;
@JsonProperty("product_name")
public String productName;
/*
"id": "000001002100613E",
"name": "V7000_A2U12",
"location": "local",
"partnership": "",
"total_mdisk_capacity": "60.9TB",
"space_in_mdisk_grps": "60.9TB",
"space_allocated_to_vdisks": "2.87TB",
"total_free_space": "58.0TB",
"total_vdiskcopy_capacity": "20.42TB",
"total_used_capacity": "2.60TB",
"total_overallocation": "33",
"total_vdisk_capacity": "20.42TB",
"total_allocated_extent_capacity": "2.92TB",
"statistics_status": "on",
"statistics_frequency": "5",
"cluster_locale": "en_US",
"time_zone": "13 Africa/Casablanca",
"code_level": "8.4.2.0 (build 154.20.2109031944000)",
"console_IP": "10.32.64.182:443",
"id_alias": "000001002100613E",
"gm_link_tolerance": "300",
"gm_inter_cluster_delay_simulation": "0",
"gm_intra_cluster_delay_simulation": "0",
"gm_max_host_delay": "5",
"email_reply": "",
"email_contact": "",
"email_contact_primary": "",
"email_contact_alternate": "",
"email_contact_location": "",
"email_contact2": "",
"email_contact2_primary": "",
"email_contact2_alternate": "",
"email_state": "stopped",
"inventory_mail_interval": "0",
"cluster_ntp_IP_address": "",
"cluster_isns_IP_address": "",
"iscsi_auth_method": "none",
"iscsi_chap_secret": "",
"auth_service_configured": "no",
"auth_service_enabled": "no",
"auth_service_url": "",
"auth_service_user_name": "",
"auth_service_pwd_set": "no",
"auth_service_cert_set": "no",
"auth_service_type": "ldap",
"relationship_bandwidth_limit": "25",
"tiers": [
{
"tier": "tier_scm",
"tier_capacity": "0.00MB",
"tier_free_capacity": "0.00MB"
},
{
"tier": "tier0_flash",
"tier_capacity": "0.00MB",
"tier_free_capacity": "0.00MB"
},
{
"tier": "tier1_flash",
"tier_capacity": "49.17TB",
"tier_free_capacity": "46.25TB"
},
{
"tier": "tier_enterprise",
"tier_capacity": "11.74TB",
"tier_free_capacity": "11.74TB"
},
{
"tier": "tier_nearline",
"tier_capacity": "0.00MB",
"tier_free_capacity": "0.00MB"
}
],
"easy_tier_acceleration": "off",
"has_nas_key": "no",
"layer": "storage",
"rc_buffer_size": "256",
"compression_active": "no",
"compression_virtual_capacity": "0.00MB",
"compression_compressed_capacity": "0.00MB",
"compression_uncompressed_capacity": "0.00MB",
"cache_prefetch": "on",
"email_organization": "",
"email_machine_address": "",
"email_machine_city": "",
"email_machine_state": "XX",
"email_machine_zip": "",
"email_machine_country": "",
"total_drive_raw_capacity": "79.25TB",
"compression_destage_mode": "off",
"local_fc_port_mask": "1111111111111111111111111111111111111111111111111111111111111111",
"partner_fc_port_mask": "1111111111111111111111111111111111111111111111111111111111111111",
"high_temp_mode": "off",
"topology": "standard",
"topology_status": "",
"rc_auth_method": "none",
"vdisk_protection_time": "15",
"vdisk_protection_enabled": "yes",
"product_name": "IBM Storwize V7000",
"odx": "off",
"max_replication_delay": "0",
"partnership_exclusion_threshold": "315",
"gen1_compatibility_mode_enabled": "no",
"ibm_customer": "",
"ibm_component": "",
"ibm_country": "",
"tier_scm_compressed_data_used": "0.00MB",
"tier0_flash_compressed_data_used": "0.00MB",
"tier1_flash_compressed_data_used": "0.00MB",
"tier_enterprise_compressed_data_used": "0.00MB",
"tier_nearline_compressed_data_used": "0.00MB",
"total_reclaimable_capacity": "0.00MB",
"physical_capacity": "60.91TB",
"physical_free_capacity": "58.00TB",
"used_capacity_before_reduction": "0.00MB",
"used_capacity_after_reduction": "0.00MB",
"overhead_capacity": "0.00MB",
"deduplication_capacity_saving": "0.00MB",
"enhanced_callhome": "on",
"censor_callhome": "off",
"host_unmap": "off",
"backend_unmap": "on",
"quorum_mode": "standard",
"quorum_site_id": "",
"quorum_site_name": "",
"quorum_lease": "short",
"automatic_vdisk_analysis_enabled": "on",
"callhome_accepted_usage": "no",
"safeguarded_copy_suspended": "no"
*/
}

@@ -0,0 +1,12 @@
package biz.nellemann.svci.dto.toml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import java.util.Map;
@JsonIgnoreProperties(ignoreUnknown = true)
public class Configuration {
public InfluxConfiguration influx;
public Map<String, SvcConfiguration> svc;
}

@@ -0,0 +1,17 @@
package biz.nellemann.svci.dto.toml;
public class InfluxConfiguration {
public String url;
public String username;
public String password;
public String database;
public InfluxConfiguration() {
// no-arg constructor required for Jackson deserialization
}
public InfluxConfiguration(String url, String username, String password, String database) {
this.url = url;
this.username = username;
this.password = password;
this.database = database;
}
}

@@ -0,0 +1,17 @@
package biz.nellemann.svci.dto.toml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
@JsonIgnoreProperties(ignoreUnknown = true)
public class SvcConfiguration {
public String url;
public String username;
public String password;
public Integer refresh = 30;
public Integer discover = 120;
public Boolean trust = true;
}
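Taken together, the three TOML DTOs above (Configuration, InfluxConfiguration, SvcConfiguration) imply a configuration file of roughly this shape. This is a minimal sketch: the `influx` and `svc` section names follow the Configuration fields, while the hostnames and credentials shown are invented placeholders.

```toml
# InfluxDB connection (maps to InfluxConfiguration)
[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "svci"

# One [svc.<name>] table per storage system (maps to Map<String, SvcConfiguration>)
[svc.v7000]
url = "https://10.0.0.1:7443"
username = "superuser"
password = "password"
refresh = 30    # seconds between metric collections (DTO default)
discover = 120  # seconds between discovery runs (DTO default)
trust = true    # trust self-signed certificates (DTO default)
```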

1
src/main/resources/.gitignore vendored Normal file
@@ -0,0 +1 @@
version.properties

@@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{16} - %msg%n</pattern>
</encoder>
</appender>
<logger name="biz.nellemann.svci" level="INFO"/>
<root level="WARN">
<appender-ref ref="STDOUT" />
</root>
</configuration>

@@ -0,0 +1,6 @@
org.slf4j.simpleLogger.logFile=System.out
org.slf4j.simpleLogger.showDateTime=false
org.slf4j.simpleLogger.showShortLogName=true
org.slf4j.simpleLogger.dateTimeFormat=yyyy-MM-dd HH:mm:ss.SSS
org.slf4j.simpleLogger.levelInBrackets=true
org.slf4j.simpleLogger.defaultLogLevel=info

@@ -0,0 +1,42 @@
package biz.nellemann.svci
import biz.nellemann.svci.dto.toml.Configuration
import biz.nellemann.svci.dto.toml.SvcConfiguration
import com.fasterxml.jackson.dataformat.toml.TomlMapper
import spock.lang.Specification
import java.nio.file.Path
import java.nio.file.Paths
class ConfigurationTest extends Specification {
Path testConfigurationFile = Paths.get(getClass().getResource('/svci.toml').toURI())
TomlMapper mapper
def setup() {
mapper = new TomlMapper();
}
def cleanup() {
}
void "test parsing of configuration file"() {
when:
Configuration conf = mapper.readerFor(Configuration.class).readValue(testConfigurationFile.toFile())
conf.svc.entrySet().forEach((e) -> {
println((String)e.key + " -> " + e);
SvcConfiguration c = e.value;
println(c.url);
});
then:
conf != null
}
}

@@ -0,0 +1,68 @@
package biz.nellemann.svci
import biz.nellemann.svci.dto.json.EnclosureStat
import biz.nellemann.svci.dto.json.System
import biz.nellemann.svci.dto.json.NodeStat
import com.fasterxml.jackson.databind.ObjectMapper
import spock.lang.Specification
import java.nio.file.Path
import java.nio.file.Paths
class DeserializationTest extends Specification {
ObjectMapper mapper
def setup() {
mapper = new ObjectMapper();
}
def cleanup() {
}
void "lssystem"() {
when:
Path testConfigurationFile = Paths.get(getClass().getResource('/lssystem.json').toURI())
System system = mapper.readerFor(System.class).readValue(testConfigurationFile.toFile())
then:
system.name == "V7000_A2U12"
system.location == "local"
system.codeLevel == "8.4.2.0 (build 154.20.2109031944000)"
system.productName == "IBM Storwize V7000"
}
void "lsnodestat"() {
when:
Path testConfigurationFile = Paths.get(getClass().getResource('/lsnodestats.json').toURI())
List<NodeStat> nodeStats = Arrays.asList(mapper.readerFor(NodeStat[].class).readValue(testConfigurationFile.toFile()))
then:
nodeStats.size() == 92
nodeStats.get(0).nodeName == "node1"
nodeStats.get(0).statName == "compression_cpu_pc"
nodeStats.get(0).statCurrent == 0
}
void "lsenclosurestats"() {
when:
Path testConfigurationFile = Paths.get(getClass().getResource('/lsenclosurestats.json').toURI())
List<EnclosureStat> enclosureStats = Arrays.asList(mapper.readerFor(EnclosureStat[].class).readValue(testConfigurationFile.toFile()))
then:
enclosureStats.size() == 6
enclosureStats.get(0).enclosureId == "1"
enclosureStats.get(0).statName == "power_w"
enclosureStats.get(0).statCurrent == 332
enclosureStats.get(0).statPeak == 333
enclosureStats.get(0).statPeakTime == 221126132328
}
}

@@ -0,0 +1,22 @@
package biz.nellemann.svci
import biz.nellemann.svci.dto.toml.InfluxConfiguration
import spock.lang.Ignore
import spock.lang.Specification
@Ignore
class InfluxClientTest extends Specification {
InfluxClient influxClient
def setup() {
influxClient = new InfluxClient(new InfluxConfiguration("http://localhost:8086", "root", "", "svci"))
influxClient.login()
}
def cleanup() {
influxClient.logoff()
}
}

@@ -0,0 +1,102 @@
package biz.nellemann.svci
import org.mockserver.integration.ClientAndServer
import org.mockserver.model.Header
import org.mockserver.model.HttpRequest
import org.mockserver.model.HttpResponse
import org.mockserver.model.MediaType
class MockResponses {
static void prepareClientResponseForLogin(ClientAndServer mockServer) {
File responseFile = new File("src/test/resources/hmc-logon-response.xml")
//def responseFile = new File(getClass().getResource('/hmc-logon-response.xml').toURI())
def req = HttpRequest.request()
.withMethod("PUT")
.withPath("/rest/api/web/Logon")
def res = HttpResponse.response()
.withStatusCode(200)
.withHeaders(
new Header("Content-Type", "application/vnd.ibm.powervm.web+xml; type=LogonResponse"),
)
.withBody(responseFile.getText('UTF-8'), MediaType.XML_UTF_8)
mockServer.when(req).respond(res)
}
static void prepareClientResponseForManagementConsole(ClientAndServer mockServer) {
File responseFile = new File("src/test/resources/1-hmc.xml")
//def responseFile = new File(getClass().getResource('/1-hmc.xml').toURI())
def req = HttpRequest.request()
.withMethod("GET")
.withPath("/rest/api/uom/ManagementConsole")
def res = HttpResponse.response()
.withStatusCode(200)
.withHeaders(
new Header("Content-Type", "application/atom+xml; charset=UTF-8"),
)
.withBody(responseFile.getText('UTF-8'), MediaType.XML_UTF_8)
mockServer.when(req).respond(res)
}
static void prepareClientResponseForManagedSystem(ClientAndServer mockServer) {
File responseFile = new File("src/test/resources/2-managed-system.xml")
//def responseFile = new File(getClass().getResource('/2-managed-system.xml').toURI())
def req = HttpRequest.request()
.withMethod("GET")
.withPath("/rest/api/uom/ManagementConsole/[0-9a-z-]+/ManagedSystem/.*")
def res = HttpResponse.response()
.withStatusCode(200)
.withHeaders(
new Header("Content-Type", "application/atom+xml; charset=UTF-8"),
)
.withBody(responseFile.getText('UTF-8'), MediaType.XML_UTF_8)
mockServer.when(req).respond(res)
}
static void prepareClientResponseForLogicalPartition(ClientAndServer mockServer) {
File responseFile = new File("src/test/resources/3-lpar.xml")
//def responseFile = new File(getClass().getResource('/3-lpar.xml').toURI())
def req = HttpRequest.request()
.withMethod("GET")
.withPath("/rest/api/uom/ManagedSystem/[0-9a-z-]+/LogicalPartition/.*")
def res = HttpResponse.response()
.withStatusCode(200)
.withHeaders(
new Header("Content-Type", "application/atom+xml; charset=UTF-8"),
)
.withBody(responseFile.getText('UTF-8'), MediaType.XML_UTF_8)
mockServer.when(req).respond(res)
}
static void prepareClientResponseForVirtualIOServer(ClientAndServer mockServer) {
File responseFile = new File("src/test/resources/2-vios.xml")
//def responseFile = new File(getClass().getResource('/2-vios.xml').toURI())
def req = HttpRequest.request()
.withMethod("GET")
.withPath("/rest/api/uom/ManagedSystem/[0-9a-z-]+/VirtualIOServer/.*")
def res = HttpResponse.response()
.withStatusCode(200)
.withHeaders(
new Header("Content-Type", "application/atom+xml; charset=UTF-8"),
)
.withBody(responseFile.getText('UTF-8'), MediaType.XML_UTF_8)
mockServer.when(req).respond(res)
}
}

@@ -0,0 +1,95 @@
package biz.nellemann.svci;
import org.mockserver.integration.ClientAndServer
import org.mockserver.logging.MockServerLogger
import org.mockserver.model.Header
import org.mockserver.model.HttpRequest
import org.mockserver.model.HttpResponse
import org.mockserver.model.MediaType
import org.mockserver.socket.PortFactory
import org.mockserver.socket.tls.KeyStoreFactory
import spock.lang.Shared
import spock.lang.Specification
import spock.lang.Stepwise
import javax.net.ssl.HttpsURLConnection
import java.util.concurrent.TimeUnit
@Stepwise
class RestClientTest extends Specification {
@Shared
private static ClientAndServer mockServer;
@Shared
private RestClient serviceClient
def setupSpec() {
HttpsURLConnection.setDefaultSSLSocketFactory(new KeyStoreFactory(new MockServerLogger()).sslContext().getSocketFactory());
mockServer = ClientAndServer.startClientAndServer(PortFactory.findFreePort());
serviceClient = new RestClient(String.format("http://localhost:%d", mockServer.getPort()), "superuser", "password", true)
}
def cleanupSpec() {
mockServer.stop()
}
def setup() {
mockServer.reset()
}
def "Test POST Request"() {
setup:
def req = HttpRequest.request()
.withMethod("POST")
.withPath("/test/post")
def res = HttpResponse.response()
.withDelay(TimeUnit.SECONDS, 1)
.withStatusCode(202)
.withHeaders(
new Header("Content-Type", "text/plain; charset=UTF-8"),
)
.withBody("Created, OK.", MediaType.TEXT_PLAIN)
mockServer.when(req).respond(res)
when:
String response = serviceClient.postRequest("/test/post", null)
then:
response == "Created, OK."
}
def "Test SVC Login"() {
setup:
def responseFile = new File(getClass().getResource('/svc-auth-response.json').toURI())
def req = HttpRequest.request()
.withHeader("X-Auth-Username", "superuser")
.withHeader("X-Auth-Password", "password")
.withMethod("POST")
.withPath("/rest/v1/auth")
def res = HttpResponse.response()
.withDelay(TimeUnit.SECONDS, 1)
.withStatusCode(200)
.withHeaders(
new Header("Content-Type", "application/json"),
)
.withBody(responseFile.getText(), MediaType.APPLICATION_JSON)
mockServer.when(req).respond(res)
when:
serviceClient.login()
then:
serviceClient.authToken == "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpYXQiOjE2Njk2MjY3MTMsImV4cCI6MTY2OTYzMDMxMywianRpIjoiN2UxYWJiZmJmNzlkMWE3YTVlNGI1MjM1M2VlZmM0ZDkiLCJzdiI6eyJ1c2VyIjoic3VwZXJ1c2VyIn19.B8MVI5XvmKi-ONX1NTaDmcMEB6SVd93kfW8beKu3Mfl70tGwCotY5-lQ3R4sZWd4hiEqvsrrCm3o1afUGlCxJw"
}
}

@@ -0,0 +1,112 @@
package biz.nellemann.svci
import org.mockserver.integration.ClientAndServer
import org.mockserver.logging.MockServerLogger
import org.mockserver.socket.PortFactory
import org.mockserver.socket.tls.KeyStoreFactory
import spock.lang.Ignore
import spock.lang.Shared
import spock.lang.Specification
import javax.net.ssl.HttpsURLConnection
@Ignore
class VolumeControllerTest extends Specification {
@Shared
private static ClientAndServer mockServer;
@Shared
private RestClient serviceClient
@Shared
private VolumeController volumeController
@Shared
private File metricsFile
def setupSpec() {
HttpsURLConnection.setDefaultSSLSocketFactory(new KeyStoreFactory(new MockServerLogger()).sslContext().getSocketFactory());
mockServer = ClientAndServer.startClientAndServer(PortFactory.findFreePort());
serviceClient = new RestClient(String.format("http://localhost:%d", mockServer.getPort()), "user", "password", false)
MockResponses.prepareClientResponseForLogin(mockServer)
//MockResponses.prepareClientResponseForManagementConsole(mockServer)
//MockResponses.prepareClientResponseForManagedSystem(mockServer)
//MockResponses.prepareClientResponseForVirtualIOServer(mockServer)
//MockResponses.prepareClientResponseForLogicalPartition(mockServer)
serviceClient.login()
volumeController = new VolumeController(serviceClient);
volumeController.discover()
}
def cleanupSpec() {
mockServer.stop()
}
def setup() {
}
def "test we got entry"() {
expect:
volumeController.entry.getName() == "Server-9009-42A-SN21F64EV"
}
void "test getDetails"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getDetails()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().tags['servername'] == 'Server-9009-42A-SN21F64EV'
listOfMeasurements.first().fields['utilizedProcUnits'] == 0.00458
listOfMeasurements.first().fields['assignedMem'] == 40448.0
}
void "test getMemoryMetrics"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getMemoryMetrics()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().fields['totalMem'] == 1048576.000
}
void "test getProcessorMetrics"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getProcessorMetrics()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().fields['availableProcUnits'] == 4.65
}
void "test getSystemSharedProcessorPools"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getSharedProcessorPools()
then:
listOfMeasurements.size() == 4
listOfMeasurements.first().fields['assignedProcUnits'] == 22.00013
}
void "test getPhysicalProcessorPool"() {
when:
volumeController.deserialize(metricsFile.getText('UTF-8'))
List<Measurement> listOfMeasurements = volumeController.getPhysicalProcessorPool()
then:
listOfMeasurements.size() == 1
listOfMeasurements.first().fields['assignedProcUnits'] == 22.0
}
}

@@ -0,0 +1,44 @@
[
{
"enclosure_id": "1",
"stat_name": "power_w",
"stat_current": "332",
"stat_peak": "333",
"stat_peak_time": "221126132328"
},
{
"enclosure_id": "1",
"stat_name": "temp_c",
"stat_current": "26",
"stat_peak": "26",
"stat_peak_time": "221126132358"
},
{
"enclosure_id": "1",
"stat_name": "temp_f",
"stat_current": "78",
"stat_peak": "78",
"stat_peak_time": "221126132358"
},
{
"enclosure_id": "2",
"stat_name": "power_w",
"stat_current": "371",
"stat_peak": "371",
"stat_peak_time": "221126132358"
},
{
"enclosure_id": "2",
"stat_name": "temp_c",
"stat_current": "28",
"stat_peak": "28",
"stat_peak_time": "221126132358"
},
{
"enclosure_id": "2",
"stat_name": "temp_f",
"stat_current": "82",
"stat_peak": "82",
"stat_peak_time": "221126132358"
}
]

@@ -0,0 +1,738 @@
[
{
"node_id": "1",
"node_name": "node1",
"stat_name": "compression_cpu_pc",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "cpu_pc",
"stat_current": "1",
"stat_peak": "1",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "fc_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "fc_io",
"stat_current": "11",
"stat_peak": "40",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "sas_mb",
"stat_current": "28",
"stat_peak": "75",
"stat_peak_time": "221126131839"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "sas_io",
"stat_current": "115",
"stat_peak": "300",
"stat_peak_time": "221126131839"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iscsi_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iscsi_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "write_cache_pc",
"stat_current": "34",
"stat_peak": "34",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "total_cache_pc",
"stat_current": "79",
"stat_peak": "79",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_io",
"stat_current": "4",
"stat_peak": "32",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_mb",
"stat_current": "28",
"stat_peak": "75",
"stat_peak_time": "221126131839"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_io",
"stat_current": "115",
"stat_peak": "300",
"stat_peak_time": "221126131839"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_ms",
"stat_current": "2",
"stat_peak": "8",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_r_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_r_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_r_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_w_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_w_io",
"stat_current": "4",
"stat_peak": "32",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "vdisk_w_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_r_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_r_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_r_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_w_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_w_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "mdisk_w_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_r_mb",
"stat_current": "28",
"stat_peak": "75",
"stat_peak_time": "221126131839"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_r_io",
"stat_current": "115",
"stat_peak": "300",
"stat_peak_time": "221126131839"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_r_ms",
"stat_current": "2",
"stat_peak": "8",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_w_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_w_io",
"stat_current": "0",
"stat_peak": "14",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "drive_w_ms",
"stat_current": "0",
"stat_peak": "7",
"stat_peak_time": "221126132024"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iplink_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iplink_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iplink_comp_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "cloud_up_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "cloud_up_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "cloud_down_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "cloud_down_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iser_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "1",
"node_name": "node1",
"stat_name": "iser_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132039"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "compression_cpu_pc",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "cpu_pc",
"stat_current": "1",
"stat_peak": "2",
"stat_peak_time": "221126132003"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "fc_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "fc_io",
"stat_current": "20",
"stat_peak": "39",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "sas_mb",
"stat_current": "74",
"stat_peak": "372",
"stat_peak_time": "221126131758"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "sas_io",
"stat_current": "297",
"stat_peak": "1484",
"stat_peak_time": "221126131758"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iscsi_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iscsi_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "write_cache_pc",
"stat_current": "34",
"stat_peak": "34",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "total_cache_pc",
"stat_current": "79",
"stat_peak": "79",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_io",
"stat_current": "12",
"stat_peak": "31",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_ms",
"stat_current": "0",
"stat_peak": "2",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_ms",
"stat_current": "0",
"stat_peak": "82",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_mb",
"stat_current": "74",
"stat_peak": "372",
"stat_peak_time": "221126131758"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_io",
"stat_current": "297",
"stat_peak": "1484",
"stat_peak_time": "221126131758"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_ms",
"stat_current": "3",
"stat_peak": "8",
"stat_peak_time": "221126131713"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_r_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_r_io",
"stat_current": "0",
"stat_peak": "5",
"stat_peak_time": "221126132013"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_r_ms",
"stat_current": "0",
"stat_peak": "66",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_w_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_w_io",
"stat_current": "12",
"stat_peak": "30",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "vdisk_w_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_r_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_r_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_r_ms",
"stat_current": "0",
"stat_peak": "82",
"stat_peak_time": "221126132023"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_w_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_w_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "mdisk_w_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_r_mb",
"stat_current": "74",
"stat_peak": "372",
"stat_peak_time": "221126131758"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_r_io",
"stat_current": "297",
"stat_peak": "1484",
"stat_peak_time": "221126131758"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_r_ms",
"stat_current": "3",
"stat_peak": "8",
"stat_peak_time": "221126131713"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_w_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_w_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "drive_w_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iplink_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iplink_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iplink_comp_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "cloud_up_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "cloud_up_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "cloud_down_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "cloud_down_ms",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iser_mb",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
},
{
"node_id": "2",
"node_name": "node2",
"stat_name": "iser_io",
"stat_current": "0",
"stat_peak": "0",
"stat_peak_time": "221126132038"
}
]

@@ -0,0 +1,131 @@
{
"id": "000001002100613E",
"name": "V7000_A2U12",
"location": "local",
"partnership": "",
"total_mdisk_capacity": "60.9TB",
"space_in_mdisk_grps": "60.9TB",
"space_allocated_to_vdisks": "2.87TB",
"total_free_space": "58.0TB",
"total_vdiskcopy_capacity": "20.42TB",
"total_used_capacity": "2.60TB",
"total_overallocation": "33",
"total_vdisk_capacity": "20.42TB",
"total_allocated_extent_capacity": "2.92TB",
"statistics_status": "on",
"statistics_frequency": "5",
"cluster_locale": "en_US",
"time_zone": "13 Africa/Casablanca",
"code_level": "8.4.2.0 (build 154.20.2109031944000)",
"console_IP": "10.32.64.182:443",
"id_alias": "000001002100613E",
"gm_link_tolerance": "300",
"gm_inter_cluster_delay_simulation": "0",
"gm_intra_cluster_delay_simulation": "0",
"gm_max_host_delay": "5",
"email_reply": "",
"email_contact": "",
"email_contact_primary": "",
"email_contact_alternate": "",
"email_contact_location": "",
"email_contact2": "",
"email_contact2_primary": "",
"email_contact2_alternate": "",
"email_state": "stopped",
"inventory_mail_interval": "0",
"cluster_ntp_IP_address": "",
"cluster_isns_IP_address": "",
"iscsi_auth_method": "none",
"iscsi_chap_secret": "",
"auth_service_configured": "no",
"auth_service_enabled": "no",
"auth_service_url": "",
"auth_service_user_name": "",
"auth_service_pwd_set": "no",
"auth_service_cert_set": "no",
"auth_service_type": "ldap",
"relationship_bandwidth_limit": "25",
"tiers": [
{
"tier": "tier_scm",
"tier_capacity": "0.00MB",
"tier_free_capacity": "0.00MB"
},
{
"tier": "tier0_flash",
"tier_capacity": "0.00MB",
"tier_free_capacity": "0.00MB"
},
{
"tier": "tier1_flash",
"tier_capacity": "49.17TB",
"tier_free_capacity": "46.25TB"
},
{
"tier": "tier_enterprise",
"tier_capacity": "11.74TB",
"tier_free_capacity": "11.74TB"
},
{
"tier": "tier_nearline",
"tier_capacity": "0.00MB",
"tier_free_capacity": "0.00MB"
}
],
"easy_tier_acceleration": "off",
"has_nas_key": "no",
"layer": "storage",
"rc_buffer_size": "256",
"compression_active": "no",
"compression_virtual_capacity": "0.00MB",
"compression_compressed_capacity": "0.00MB",
"compression_uncompressed_capacity": "0.00MB",
"cache_prefetch": "on",
"email_organization": "",
"email_machine_address": "",
"email_machine_city": "",
"email_machine_state": "XX",
"email_machine_zip": "",
"email_machine_country": "",
"total_drive_raw_capacity": "79.25TB",
"compression_destage_mode": "off",
"local_fc_port_mask": "1111111111111111111111111111111111111111111111111111111111111111",
"partner_fc_port_mask": "1111111111111111111111111111111111111111111111111111111111111111",
"high_temp_mode": "off",
"topology": "standard",
"topology_status": "",
"rc_auth_method": "none",
"vdisk_protection_time": "15",
"vdisk_protection_enabled": "yes",
"product_name": "IBM Storwize V7000",
"odx": "off",
"max_replication_delay": "0",
"partnership_exclusion_threshold": "315",
"gen1_compatibility_mode_enabled": "no",
"ibm_customer": "",
"ibm_component": "",
"ibm_country": "",
"tier_scm_compressed_data_used": "0.00MB",
"tier0_flash_compressed_data_used": "0.00MB",
"tier1_flash_compressed_data_used": "0.00MB",
"tier_enterprise_compressed_data_used": "0.00MB",
"tier_nearline_compressed_data_used": "0.00MB",
"total_reclaimable_capacity": "0.00MB",
"physical_capacity": "60.91TB",
"physical_free_capacity": "58.00TB",
"used_capacity_before_reduction": "0.00MB",
"used_capacity_after_reduction": "0.00MB",
"overhead_capacity": "0.00MB",
"deduplication_capacity_saving": "0.00MB",
"enhanced_callhome": "on",
"censor_callhome": "off",
"host_unmap": "off",
"backend_unmap": "on",
"quorum_mode": "standard",
"quorum_site_id": "",
"quorum_site_name": "",
"quorum_lease": "short",
"automatic_vdisk_analysis_enabled": "on",
"callhome_accepted_usage": "no",
"safeguarded_copy_suspended": "no"
}
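The lssystem output above reports capacities as strings like `"60.9TB"` and `"0.00MB"`. A small sketch of parsing those into byte counts, assuming the suffixes denote binary units (an assumption about how the CLI rounds and labels capacity, not a documented guarantee):

```python
# Hypothetical sketch: parse capacity strings from lssystem-style
# output ("60.9TB", "0.00MB") into byte counts. The binary-unit
# interpretation (TB == 2**40 bytes) is an assumption.
import re

UNITS = {"MB": 2**20, "GB": 2**30, "TB": 2**40}

def parse_capacity(text: str) -> float:
    """Convert a capacity string such as '60.9TB' to bytes."""
    m = re.fullmatch(r"([\d.]+)(MB|GB|TB)", text)
    if m is None:
        raise ValueError(f"unrecognized capacity: {text!r}")
    value, unit = m.groups()
    return float(value) * UNITS[unit]

print(parse_capacity("0.00MB"))  # 0.0
```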


@@ -0,0 +1 @@
{"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpYXQiOjE2Njk2MjY3MTMsImV4cCI6MTY2OTYzMDMxMywianRpIjoiN2UxYWJiZmJmNzlkMWE3YTVlNGI1MjM1M2VlZmM0ZDkiLCJzdiI6eyJ1c2VyIjoic3VwZXJ1c2VyIn19.B8MVI5XvmKi-ONX1NTaDmcMEB6SVd93kfW8beKu3Mfl70tGwCotY5-lQ3R4sZWd4hiEqvsrrCm3o1afUGlCxJw"}
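The token above is a standard JWT (HS512-signed). A quick sketch of inspecting its unverified header and claims; this only base64-decodes the first two segments and performs no signature verification (the signature is elided in the sample below since it is never decoded):

```python
# Sketch: decode the (unverified) header and claims of a JWT like the
# one in the token file above. No signature check is performed.
import base64
import json

TOKEN = ("eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9."
         "eyJpYXQiOjE2Njk2MjY3MTMsImV4cCI6MTY2OTYzMDMxMywianRpIjoiN2UxYWJi"
         "ZmJmNzlkMWE3YTVlNGI1MjM1M2VlZmM0ZDkiLCJzdiI6eyJ1c2VyIjoic3VwZXJ1"
         "c2VyIn19.sig-elided")  # signature segment not needed for decoding

def jwt_segments(token: str):
    """Return (header, claims) as dicts, ignoring the signature."""
    out = []
    for seg in token.split(".")[:2]:
        seg += "=" * (-len(seg) % 4)  # restore base64url padding
        out.append(json.loads(base64.urlsafe_b64decode(seg)))
    return tuple(out)

header, claims = jwt_segments(TOKEN)
print(header["alg"])         # HS512
print(claims["sv"]["user"])  # superuser
```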


@@ -0,0 +1,18 @@
# SVCi Configuration
# InfluxDB to save metrics
[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "svci"
# SVC on our primary site
[svc.site1]
url = "https://10.10.10.18:7443"
username = "superuser"
password = "password"
refresh = 29
discover = 59
trust = true # Ignore SSL cert. errors