Compare commits
22 Commits
Author | SHA1
---|---
Mark Nellemann | 706f0c7038
Mark Nellemann | 2c1921564b
Mark Nellemann | a24b03f4ad
Mark Nellemann | d59079e6da
Mark Nellemann | bdfa535b75
Mark Nellemann | 41decccc82
Mark Nellemann | 24d1701ab3
Mark Nellemann | 5b2a3ff9ea
Mark Nellemann | ec9586f870
Mark Nellemann | 46fd9d7671
Mark Nellemann | 8f4fbc6a93
Mark Nellemann | 39af1e3c00
Mark Nellemann | 6b9b78f32c
Mark Nellemann | 2967f6ef75
Mark Nellemann | 6699566fba
Mark Nellemann | 55e7fe2b90
Mark Nellemann | e30d290f07
Mark Nellemann | f461b40321
Mark Nellemann | c64bf66d9d
Mark Nellemann | 2e363f0a39
Mark Nellemann | aa36e51367
Mark Nellemann | 5952a21714
CHANGELOG.md
@@ -2,36 +2,44 @@
All notable changes to this project will be documented in this file.

## [1.4.1] - 2011-12-15
## 1.4.5 - 2023-11-13

- Adjust timeout to not have lingering sessions on HMC
- Update 3rd party dependencies

## 1.4.4 - 2023-05-20

- Support for InfluxDB v2, now requires InfluxDB 1.8 or later
- Increase influx writer buffer limit
- Various dashboard improvements

## 1.4.3 - 2023-03-21

- Fix and improve processor utilization dashboards.
- Minor code cleanup.

## 1.4.2 - 2023-01-05

- Fix error in SR-IOV port type being null.

## 1.4.1 - 2022-12-15

- Retrieve multiple PCM samples and keep track of processing.
- Rename VIOS metric 'vFC' (storage adapter) to 'virtual'.

## [1.4.0] - 2011-12-01
## 1.4.0 - 2022-12-01

- Rewrite of toml+xml+json de-serialization code (uses jackson now).
- Changes to configuration file format - please look at [doc/hmci.toml](doc/hmci.toml) as example.
- Logging (write to file) JSON output from HMC is currently not possible.

## [1.3.3] - 2022-09-20
## 1.3.3 - 2022-09-20

- Default configuration location on Windows platform.
- Process LPAR SR-IOV logical network ports data
- Update default dashboards
- Update documentation

## [1.3.0] - 2022-02-04
## 1.3.0 - 2022-02-04

- Correct use of InfluxDB batch writing.

## [1.2.8] - 2022-02-28
## 1.2.8 - 2022-02-28

- Sort measurement tags before writing to InfluxDB.
- Update 3rd party dependencies.

## [1.2.7] - 2022-02-24
## 1.2.7 - 2022-02-24

- Options to include/exclude Managed Systems and/or Logical Partitions.

[1.4.1]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.4.1%0Dv1.4.0
[1.4.0]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.4.0%0Dv1.3.3
[1.3.3]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.3.3%0Dv1.3.0
[1.3.0]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.3.0%0Dv1.2.8
[1.2.8]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.2.8%0Dv1.2.7
[1.2.7]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.2.7%0Dv1.2.6
[1.2.6]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.2.6%0Dv1.2.5
|
README.md
@@ -1,222 +1,3 @@
# HMC Insights
# Repository moved

**HMCi** is a utility that collects metrics from one or more *IBM Power Hardware Management Consoles (HMC)*, without the need to install agents on the logical partitions / virtual machines running on the IBM Power systems. The metric data is processed and saved into an InfluxDB time-series database. Grafana is used to visualize the metrics from InfluxDB through the provided dashboards, or your own customized dashboards.

This software is free to use and is licensed under the [Apache 2.0 License](LICENSE), but is not supported or endorsed by International Business Machines (IBM).

Metrics include:

- *Managed Systems* - the physical Power servers
- *Logical Partitions* - the virtualized servers running AIX, Linux or IBM-i (AS/400)
- *Virtual I/O Servers* - the I/O partition(s) virtualizing network and storage
- *Energy* - watts and temperatures (needs to be enabled and is not available on P7 and multi-chassis systems)
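Once data is flowing, these metric groups end up as InfluxDB measurements, which you can list from the **influx** CLI. The example names below are taken from the dashboard queries further down this page and are only a subset:

```text
SHOW MEASUREMENTS ON "hmci"
-- e.g. lpar_processor, lpar_details, server_processor, server_energy_power
```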
![architecture](doc/HMCi.png)

Some of my other related projects are:

- [svci](https://git.data.coop/nellemann/svci) for monitoring IBM Spectrum Virtualize (FlashSystem / Storwize / SVC)
- [sysmon](https://git.data.coop/nellemann/sysmon) for monitoring all types of servers with a small Java agent
- [syslogd](https://git.data.coop/nellemann/syslogd) for redirecting syslog and GELF to remote logging destinations
## Installation and Setup

There are a few steps in the installation:

1. Preparations on the Hardware Management Console (HMC)
2. Installation of InfluxDB and Grafana software
3. Installation and configuration of *HMC Insights* (HMCi)
4. Configure Grafana and import example dashboards

### 1 - IBM Power HMC Setup Instructions

- Login to your HMC
- Navigate to *Console Settings*
  - Go to *Change Date and Time*
    - Set correct timezone, if not done already
    - Configure one or more NTP servers, if not done already
    - Enable the NTP client, if not done already
- Navigate to *Users and Security*
  - Create a new read-only/viewer **hmci** user, which will be used to connect to the HMC.
  - Click *Manage User Profiles and Access*, edit the newly created *hmci* user and click *User Properties*:
    - Set *Session timeout minutes* to **60**
    - Set *Verify timeout minutes* to **15**
    - Set *Idle timeout minutes* to **90**
    - Set *Minimum time in days between password changes* to **0**
    - **Enable** *Allow remote access via the web*
- Navigate to *HMC Management* and *Console Settings*
  - Click *Change Performance Monitoring Settings*:
    - Enable *Performance Monitoring Data Collection for Managed Servers*: **All On**
    - Set *Performance Data Storage* to **1** day or preferably more

If you do not enable *Performance Monitoring Data Collection for Managed Servers*, you will see errors such as *Unexpected response: 403*. Use the HMCi debug option to get more details about what is going on.

### 2 - InfluxDB and Grafana Installation

Install InfluxDB (v. **1.8.x** or **1.9.x** for best compatibility with Grafana) on a host which is network accessible by the HMCi utility (the default InfluxDB port is 8086). You can install Grafana on the same server or on any server which is able to connect to the InfluxDB database. The Grafana installation needs to be accessible from your browser (default on port 3000). The default settings for both InfluxDB and Grafana will work fine as a start.

- You can download [Grafana ppc64le](https://www.power-devops.com/grafana) and [InfluxDB ppc64le](https://www.power-devops.com/influxdb) packages for most Linux distributions and AIX on the [Power DevOps](https://www.power-devops.com/) site.
- Binaries for amd64/x86 are available from the [Grafana website](https://grafana.com/grafana/download) (select the **OSS variant**) and the [InfluxDB website](https://portal.influxdata.com/downloads/), and most likely directly from your Linux distribution's repositories.
- Create the empty *hmci* database by running the **influx** CLI command and typing:

```text
CREATE DATABASE "hmci" WITH DURATION 365d REPLICATION 1;
```
See the [Influx documentation](https://docs.influxdata.com/influxdb/v1.8/query_language/manage-database/#create-database) for more information on duration and replication.
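To verify the database and the retention policy created above, still from the **influx** CLI (the exact output columns depend on your InfluxDB version):

```text
SHOW DATABASES
SHOW RETENTION POLICIES ON "hmci"
```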
### 3 - HMCi Installation & Configuration

Install *HMCi* on a host that can connect to your Power HMC (on port 12443) and is also allowed to connect to the InfluxDB service. This *can be* the same LPAR/VM as used for the InfluxDB installation.

- Ensure you have **correct date/time** and NTPd running to keep it accurate!
- The only requirement for **hmci** is the Java runtime, version 8 (or later)
- Install **HMCi** from [packages](https://git.data.coop/nellemann/-/packages/generic/hmci/) (rpm, deb or jar) or build from source
  - On RPM based systems: ```sudo rpm -ivh hmci-x.y.z-n.noarch.rpm```
  - On DEB based systems: ```sudo dpkg -i hmci_x.y.z-n_all.deb```
- Copy the **/opt/hmci/doc/hmci.toml** configuration example into **/etc/hmci.toml** and edit the configuration to suit your environment. The location of the configuration file can optionally be changed with the *--conf* option.
- Run the **/opt/hmci/bin/hmci** program in a shell, as a @reboot cron task or configure it as a proper service - there are instructions in the [doc/readme-service.md](doc/readme-service.md) file.
- When started, *hmci* expects the InfluxDB database to exist already.
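A minimal sketch of what **/etc/hmci.toml** might look like. The section and key names below are illustrative assumptions (only the port, the *hmci* user, the *hmci* database and the *trace* key are taken from this page) - use the shipped [doc/hmci.toml](doc/hmci.toml) as the authoritative example:

```toml
# InfluxDB connection - the "hmci" database must already exist (see step 2)
[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "hmci"

# One section per HMC to poll; port 12443 is the HMC REST API
[hmc.site1]
url = "https://hmc.example.com:12443"
username = "hmci"            # the read-only HMC user created in step 1
password = "secret"
unsafe = true                # assumption: accept a self-signed HMC certificate
#trace = "/tmp/hmci-trace"   # optionally dump raw JSON from the HMC (see Known problems)
```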
### 4 - Grafana Configuration

- Configure Grafana to use InfluxDB as a new datasource
  - **NOTE:** set *Min time interval* to *30s* or *1m* depending on your HMCi *refresh* setting.
- Import the example dashboards from [doc/dashboards/*.json](doc/dashboards/) into Grafana as a starting point and get creative making your own cool dashboards - please share anything useful :)

## Notes

### No data (or past/future data) shown in Grafana

This is most likely due to timezone, date and/or NTP not being configured correctly on the HMC and/or the host running HMCi.

Example showing how to configure the related settings through the HMC CLI:

```shell
chhmc -c date -s modify --datetime MMDDhhmm           # Set current date/time: MMDDhhmm[[CC]YY][.ss]
chhmc -c date -s modify --timezone Europe/Copenhagen  # Configure your timezone
chhmc -c xntp -s enable                               # Enable the NTP service
chhmc -c xntp -s add -a IP_Addr                       # Add a remote NTP server
```

Remember to reboot your HMC after changing the timezone.

### Compatibility with nextract Plus

From version 1.2, *HMCi* is compatible with the similar [nextract Plus](https://www.ibm.com/support/pages/nextract-plus-hmc-rest-api-performance-statistics) tool from Nigel Griffiths. This means that the Grafana [dashboards](https://grafana.com/grafana/dashboards/13819) made by Nigel are compatible with *HMCi*, and the other way around.

### Start InfluxDB and Grafana at boot (systemd compatible Linux)

```shell
systemctl enable influxdb
systemctl start influxdb

systemctl enable grafana-server
systemctl start grafana-server
```

### InfluxDB Retention Policy

Examples of changing the default InfluxDB retention policy for the hmci database:

```text
ALTER RETENTION POLICY "autogen" ON "hmci" DURATION 156w
ALTER RETENTION POLICY "autogen" ON "hmci" DURATION 90d
```

### Upgrading HMCi

On RPM based systems (RedHat, Suse, CentOS), download the latest *hmci-x.y.z-n.noarch.rpm* file and upgrade:

```shell
sudo rpm -Uvh hmci-x.y.z-n.noarch.rpm
```

On DEB based systems (Debian, Ubuntu and derivatives), download the latest *hmci_x.y.z-n_all.deb* file and upgrade:

```shell
sudo dpkg -i hmci_x.y.z-n_all.deb
```

Restart the HMCi service on *systemd* based Linux systems:

```shell
systemctl restart hmci
journalctl -f -u hmci    # to check log output
```

### AIX Notes

To install (or upgrade) on AIX, you need to pass the *--ignoreos* flag to the *rpm* command:

```shell
rpm -Uvh --ignoreos hmci-x.y.z-n.noarch.rpm
```

## Dashboard Screenshots

Screenshots of some of the provided Grafana dashboards can be found in the [doc/screenshots/](doc/screenshots) folder.

## Known problems

### Incomplete set of metrics

I have not been able to test and verify all types of metric data. If you encounter any missing or wrong data, please [contact me](mailto:mark.nellemann@gmail.com) and I will try to fix it.

It is possible to save the raw JSON data received from the HMC, which can help me implement missing data. You need to specify **trace = "/tmp/hmci-trace"**, or some other location, in the configuration file under the HMC instance.

### Naming collision

You can't have partitions (or Virtual I/O Servers) on different systems with the same name, as these cannot be distinguished when metrics are written to InfluxDB (which uses the name as key).
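A sketch of why this bites: with only the *lparname* tag as the key, points from two systems merge into one series. The bundled dashboards mitigate this by grouping on both tags, as in this Grafana-style query (the `$timeFilter`/`$interval` template variables only resolve inside Grafana):

```text
SELECT mean("utilizedProcUnits") FROM "lpar_processor"
WHERE "lparname" = 'lpar01' AND $timeFilter
GROUP BY time($interval), "lparname", "servername"
```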
### Renaming partitions

If you rename a partition, the metrics in InfluxDB will still be available under the old name, and new metrics will be available under the new name of the partition. There is no easy way to migrate the old data, but you can delete it easily:

```text
DELETE WHERE lparname = 'name';
```

## Development Information

You need Java (JDK) version 8 or later to build hmci.

### Build & Test

Use the gradle build tool, which will download all required dependencies:

```shell
./gradlew clean build
```

### Local Testing

#### InfluxDB

Start the InfluxDB container:

```shell
docker run --name=influxdb --rm -d -p 8086:8086 influxdb:1.8
```

Create the *hmci* database:

```shell
docker exec -i influxdb influx -execute "CREATE DATABASE hmci"
```

#### Grafana

Start the Grafana container, linking it to the InfluxDB container:

```shell
docker run --name grafana --link influxdb:influxdb --rm -d -p 3000:3000 grafana/grafana
```

Setup Grafana to connect to the InfluxDB container by defining a new datasource on URL *http://influxdb:8086* named *hmci*.

Grafana dashboards can be imported from the *doc/* folder.

Please visit [github.com/mnellemann/hmci](https://github.com/mnellemann/hmci)
TODO.md
@@ -1,8 +0,0 @@

# TODO

In *ManagementConsole run()* - should we try to sleep up until the closest 30-second interval to get the freshest data?
Or should we get more data samples and keep track of which we have already processed, and then sleep for shorter times?

Set how many samples to ask for and process.
Loop over the samples.
Keep track of sample status and whether they have been processed.
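The second idea in this note (fetch several samples per poll, remember which were processed, sleep for shorter times) can be sketched like this; a toy illustration, not the actual HMCi code:

```python
# Sketch: fetch several PCM samples per poll and process only the ones not
# seen before, so polling more often never double-counts a sample.
def process_new(samples, seen):
    """samples: list of (timestamp, payload) tuples; seen: set of handled timestamps."""
    fresh = [(ts, payload) for ts, payload in samples if ts not in seen]
    seen.update(ts for ts, _ in fresh)  # remember what we just handled
    return fresh

seen = set()
first = process_new([(1, "cpu"), (2, "mem")], seen)   # both samples are new
second = process_new([(2, "mem"), (3, "net")], seen)  # the overlap is skipped
```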
build.gradle
@@ -1,13 +1,10 @@

plugins {
    id 'java'
    id 'jacoco'
    id 'groovy'
    id 'application'

    // Code coverage of tests
    id 'jacoco'

    id "net.nemerosa.versioning" version "2.15.1"
    id "com.netflix.nebula.ospackage" version "10.0.0"
    id "com.netflix.nebula.ospackage" version "11.5.0"
    id "com.github.johnrengelman.shadow" version "7.1.2"
}

@@ -20,19 +17,18 @@ group = projectGroup
version = projectVersion

dependencies {
    annotationProcessor 'info.picocli:picocli-codegen:4.7.0'
    implementation 'info.picocli:picocli:4.7.0'
    implementation 'org.influxdb:influxdb-java:2.23'
    //implementation 'com.influxdb:influxdb-client-java:6.7.0'
    implementation 'org.slf4j:slf4j-api:2.0.6'
    implementation 'org.slf4j:slf4j-simple:2.0.6'
    implementation 'com.squareup.okhttp3:okhttp:4.10.0' // Also used by InfluxDB Client
    implementation 'com.fasterxml.jackson.core:jackson-databind:2.14.1'
    implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.14.1'
    implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-toml:2.14.1'
    annotationProcessor 'info.picocli:picocli-codegen:4.7.5'
    implementation 'info.picocli:picocli:4.7.5'
    implementation 'org.slf4j:slf4j-api:2.0.9'
    implementation 'org.slf4j:slf4j-simple:2.0.9'
    implementation 'com.squareup.okhttp3:okhttp:4.11.0' // Also used by InfluxDB Client
    implementation 'com.influxdb:influxdb-client-java:6.10.0'
    implementation 'com.fasterxml.jackson.core:jackson-databind:2.15.2'
    implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.15.2'
    implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-toml:2.15.2'

    testImplementation 'junit:junit:4.13.2'
    testImplementation 'org.spockframework:spock-core:2.3-groovy-3.0'
    testImplementation 'org.spockframework:spock-core:2.3-groovy-4.0'
    testImplementation "org.mock-server:mockserver-netty-no-dependencies:5.14.0"
}

@@ -87,7 +83,7 @@ buildDeb {
}

jacoco {
    toolVersion = "0.8.8"
    toolVersion = "0.8.9"
}

jacocoTestReport {
File diff suppressed because one or more lines are too long

doc/HMCi.png
Binary file not shown. (Before: 109 KiB, After: 163 KiB)
@@ -71,7 +71,7 @@
}
]
},
"description": "https://bitbucket.org/mnellemann/hmci/",
"description": "https://git.data.coop/nellemann/hmci/ - Metrics from IBM Power Systems",
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": 1510,

@@ -93,7 +93,7 @@
},
"id": 37,
"options": {
"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
"content": "## Metrics collected from IBM Power HMC\n \nFor more information visit: [git.data.coop/nellemann/hmci](https://git.data.coop/nellemann/hmci)\n ",
"mode": "markdown"
},
"pluginVersion": "9.1.6",

@@ -390,7 +390,7 @@
"measurement": "lpar_details",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"weight\") AS \"Weight\", last(\"mode\") AS \"Mode\", last(\"entitledProcUnits\") AS \"eCPU\", mean(\"utilizedProcUnits\") / mean(\"entitledProcUnits\")*100 AS \"Utilization eCPU\", last(\"currentVirtualProcessors\") AS \"vCPU\", mean(\"utilizedProcUnits\") / mean(\"maxProcUnits\") * 100 AS \"Utilization vCPU\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/) AND (\"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY \"lparname\" fill(previous)",
"query": "SELECT last(\"weight\") AS \"Weight\", last(\"mode\") AS \"Mode\", last(\"entitledProcUnits\") AS \"eCPU\", mean(\"utilizedProcUnits\") / mean(\"entitledProcUnits\")*100 AS \"Utilization eCPU\", last(\"currentVirtualProcessors\") AS \"vCPU\", mean(\"utilizedProcUnits\") / mean(\"currentVirtualProcessors\") * 100 AS \"Utilization vCPU\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/) AND (\"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY \"lparname\" fill(previous)",
"queryType": "randomWalk",
"rawQuery": true,
"refId": "A",

@@ -534,6 +534,138 @@
"type": "influxdb",
"uid": "${DS_HMCI}"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 3,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineStyle": {
"fill": "solid"
},
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "line"
}
},
"decimals": 2,
"links": [],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 12
},
"id": 2,
"links": [],
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "8.1.4",
"targets": [
{
"alias": "$tag_lparname",
"datasource": {
"type": "influxdb",
"uid": "${DS_HMCI}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"hide": false,
"measurement": "/^$ServerName$/",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"utilizedProcUnits\") AS \"usage\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY time($interval), \"lparname\", \"servername\" fill(linear)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
}
],
"title": "Processor Units - Utilization Stacked",
"transformations": [],
"type": "timeseries"
},
{
"datasource": {
"type": "influxdb",
"uid": "${DS_HMCI}"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {

@@ -596,11 +728,11 @@
},
"gridPos": {
"h": 11,
"w": 24,
"x": 0,
"w": 12,
"x": 12,
"y": 12
},
"id": 2,
"id": 40,
"links": [],
"options": {
"legend": {

@@ -640,7 +772,7 @@
"measurement": "/^$ServerName$/",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT (mean(\"utilizedProcUnits\") / mean(\"maxProcUnits\")) * 100 AS \"usage\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY time($interval), \"lparname\", \"servername\" fill(linear)",
"query": "SELECT (mean(\"utilizedProcUnits\") / mean(\"entitledProcUnits\")) * 100 AS \"usage\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY time($interval), \"lparname\", \"servername\" fill(linear)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",

@@ -661,7 +793,7 @@
"tags": []
}
],
"title": "Processor Units - Utilization Percentage",
"title": "Processor Units - Utilization / Entitled",
"transformations": [],
"type": "timeseries"
},

@@ -2509,7 +2641,7 @@
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "percent"
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"

@@ -2521,10 +2653,6 @@
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},

@@ -2616,11 +2744,11 @@
]
}
],
"title": "Memory Assigned",
"title": "Memory Assigned - Stacked",
"type": "timeseries"
}
],
"refresh": false,
"refresh": "30s",
"schemaVersion": 37,
"style": "dark",
"tags": [

@@ -2659,7 +2787,7 @@
"type": "influxdb",
"uid": "${DS_HMCI}"
},
"definition": "SHOW TAG VALUES FROM \"lpar_processor\" WITH KEY = \"lparname\" WHERE servername =~ /$ServerName/",
"definition": "SHOW TAG VALUES FROM \"lpar_processor\" WITH KEY = \"lparname\" WHERE servername =~ /$ServerName/ ",
"hide": 0,
"includeAll": true,
"label": "Logical Partition",

@@ -2667,7 +2795,7 @@
"multiFormat": "regex values",
"name": "LPAR",
"options": [],
"query": "SHOW TAG VALUES FROM \"lpar_processor\" WITH KEY = \"lparname\" WHERE servername =~ /$ServerName/",
"query": "SHOW TAG VALUES FROM \"lpar_processor\" WITH KEY = \"lparname\" WHERE servername =~ /$ServerName/ ",
"refresh": 1,
"refresh_on_load": false,
"regex": "",

@@ -2710,6 +2838,6 @@
"timezone": "browser",
"title": "HMCi - Power LPAR Overview",
"uid": "Xl7oHESGz",
"version": 4,
"version": 9,
"weekStart": ""
}
@@ -1,7 +1,7 @@
{
"__inputs": [
{
"name": "DS_INFLUXDB",
"name": "DS_HMCI",
"label": "Database",
"description": "",
"type": "datasource",

@@ -15,7 +15,7 @@
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "9.1.3"
"version": "9.1.6"
},
{
"type": "datasource",

@@ -59,7 +59,7 @@
}
]
},
"description": "https://bitbucket.org/mnellemann/hmci/",
"description": "https://git.data.coop/nellemann/hmci/ - Metrics from IBM Power Systems",
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": 1510,

@@ -71,7 +71,7 @@
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"gridPos": {
"h": 3,

@@ -81,15 +81,15 @@
},
"id": 37,
"options": {
"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
"content": "## Metrics collected from IBM Power HMC\n \nFor more information visit: [git.data.coop/nellemann/hmci](https://git.data.coop/nellemann/hmci)\n ",
"mode": "markdown"
},
"pluginVersion": "9.1.3",
"pluginVersion": "9.1.6",
"targets": [
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"refId": "A"
}

@@ -100,7 +100,7 @@
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"description": "",
"fieldConfig": {

@@ -189,7 +189,7 @@
"alias": "$tag_lparname",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"groupBy": [
{

@@ -209,7 +209,7 @@
"measurement": "/^$ServerName$/",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"maxProcUnits\") AS \"usage\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY time($interval), \"lparname\", \"servername\" fill(none)",
"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"currentVirtualProcessors\") AS \"usage\" FROM \"lpar_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"lparname\" =~ /^$LPAR$/) AND $timeFilter GROUP BY time($interval), \"lparname\", \"servername\" fill(none)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",

@@ -237,7 +237,7 @@
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"description": "",
"fieldConfig": {

@@ -319,7 +319,7 @@
"alias": "$tag_servername - $tag_lparname ($col)",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"dsType": "influxdb",
"groupBy": [

@@ -419,7 +419,7 @@
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"description": "",
"fieldConfig": {

@@ -497,7 +497,7 @@
"alias": "$tag_servername - $tag_lparname ($col)",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"dsType": "influxdb",
"groupBy": [

@@ -620,7 +620,7 @@
"current": {},
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"definition": "SHOW TAG VALUES FROM \"server_processor\" WITH KEY = \"servername\" WHERE time > now() - 24h",
"hide": 0,

@@ -644,7 +644,7 @@
"current": {},
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
"uid": "${DS_HMCI}"
},
"definition": "SHOW TAG VALUES FROM \"lpar_processor\" WITH KEY = \"lparname\" WHERE servername =~ /$ServerName/",
"hide": 0,

@@ -697,6 +697,6 @@
"timezone": "browser",
"title": "HMCi - Power LPAR Utilization",
"uid": "jFsbpTH4k",
"version": 4,
"version": 2,
"weekStart": ""
}
@ -70,7 +70,7 @@
|
|||
}
|
||||
]
|
||||
},
|
||||
"description": "https://bitbucket.org/mnellemann/hmci/",
|
||||
"description": "https://git.data.coop/nellemann/hmci/ - Metrics from IBM Power Systems",
|
||||
"editable": true,
|
||||
"fiscalYearStartMonth": 0,
|
||||
"graphTooltip": 0,
|
||||
|
@ -91,7 +91,7 @@
|
|||
},
|
||||
"id": 11,
|
||||
"options": {
|
||||
"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
|
||||
"content": "## Metrics collected from IBM Power HMC\n \nFor more information visit: [git.data.coop/nellemann/hmci](https://git.data.coop/nellemann/hmci)\n ",
|
||||
"mode": "markdown"
|
||||
},
|
||||
"pluginVersion": "9.1.6",
|
||||
|
@ -107,6 +107,21 @@
|
|||
"transparent": true,
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"collapsed": false,
|
||||
"gridPos": {
|
||||
"h": 1,
|
||||
"w": 24,
|
||||
"x": 0,
|
||||
"y": 3
|
||||
},
|
||||
"id": 15,
|
||||
"panels": [],
|
||||
"repeat": "ServerName",
|
||||
"repeatDirection": "h",
|
||||
"title": "$ServerName",
|
||||
"type": "row"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "influxdb",
|
||||
|
@ -140,7 +155,7 @@
|
|||
"h": 7,
|
||||
"w": 24,
|
||||
"x": 0,
|
||||
"y": 3
|
||||
"y": 4
|
||||
},
|
||||
"id": 7,
|
||||
"options": {
|
||||
|
@ -250,7 +265,7 @@
|
|||
"h": 11,
|
||||
"w": 8,
|
||||
"x": 0,
|
||||
"y": 10
|
||||
"y": 11
|
||||
},
|
||||
"id": 4,
|
||||
"options": {
|
||||
|
@ -453,7 +468,7 @@
|
|||
"h": 11,
|
||||
"w": 16,
|
||||
"x": 8,
|
||||
"y": 10
|
||||
"y": 11
|
||||
},
|
||||
"id": 12,
|
||||
"options": {
|
||||
|
@ -629,7 +644,7 @@
|
|||
"h": 10,
|
||||
"w": 8,
|
||||
"x": 0,
|
||||
"y": 21
|
||||
"y": 22
|
||||
},
|
||||
"id": 13,
|
||||
"options": {
|
||||
|
@@ -779,7 +794,7 @@
 "h": 10,
 "w": 16,
 "x": 8,
-"y": 21
+"y": 22
 },
 "id": 5,
 "options": {
@@ -874,13 +889,13 @@
 "type": "influxdb",
 "uid": "${DS_HMCI}"
 },
-"definition": "SHOW TAG VALUES FROM \"server_processor\" WITH KEY = \"servername\" WHERE time > now() - 24h",
+"definition": "SHOW TAG VALUES FROM \"server_energy_power\" WITH KEY = \"servername\" WHERE time > now() - 24h",
 "hide": 0,
-"includeAll": false,
-"multi": false,
+"includeAll": true,
+"multi": true,
 "name": "ServerName",
 "options": [],
-"query": "SHOW TAG VALUES FROM \"server_processor\" WITH KEY = \"servername\" WHERE time > now() - 24h",
+"query": "SHOW TAG VALUES FROM \"server_energy_power\" WITH KEY = \"servername\" WHERE time > now() - 24h",
 "refresh": 1,
 "regex": "",
 "skipUrlSync": false,
@@ -912,6 +927,6 @@
 "timezone": "",
 "title": "HMCi - Power System Energy",
 "uid": "oHcrgD1Mk",
-"version": 2,
+"version": 7,
 "weekStart": ""
 }

File diff suppressed because it is too large

@@ -1,7 +1,7 @@
 {
 "__inputs": [
 {
-"name": "DS_INFLUXDB",
+"name": "DS_HMCI",
 "label": "Database",
 "description": "",
 "type": "datasource",
@@ -77,7 +77,7 @@
 }
 ]
 },
-"description": "https://bitbucket.org/mnellemann/hmci/",
+"description": "https://git.data.coop/nellemann/hmci/ - Metrics from IBM Power Systems",
 "editable": true,
 "fiscalYearStartMonth": 0,
 "gnetId": 1465,
@@ -90,7 +90,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "gridPos": {
 "h": 3,
@@ -100,7 +100,7 @@
 },
 "id": 33,
 "options": {
-"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
+"content": "## Metrics collected from IBM Power HMC\n \nFor more information visit: [git.data.coop/nellemann/hmci](https://git.data.coop/nellemann/hmci)\n ",
 "mode": "markdown"
 },
 "pluginVersion": "8.3.5",
@@ -108,7 +108,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "refId": "A"
 }
@@ -147,7 +147,7 @@
 "alias": "$tag_servername",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "groupBy": [
 {
@@ -273,7 +273,7 @@
 "alias": "$tag_servername",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "groupBy": [
 {
@@ -381,7 +381,7 @@
 "alias": "$tag_servername",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "groupBy": [
 {
@@ -482,7 +482,7 @@
 "alias": "$tag_servername",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "groupBy": [
 {
@@ -597,7 +597,7 @@
 "alias": "$tag_servername",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "dsType": "influxdb",
 "groupBy": [

@@ -71,7 +71,7 @@
 }
 ]
 },
-"description": "https://bitbucket.org/mnellemann/hmci/",
+"description": "https://git.data.coop/nellemann/hmci/ - Metrics from IBM Power Systems",
 "editable": true,
 "fiscalYearStartMonth": 0,
 "gnetId": 1465,
@@ -93,7 +93,7 @@
 },
 "id": 29,
 "options": {
-"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
+"content": "## Metrics collected from IBM Power HMC\n \nFor more information visit: [git.data.coop/nellemann/hmci](https://git.data.coop/nellemann/hmci)\n ",
 "mode": "markdown"
 },
 "pluginVersion": "9.1.6",
@@ -445,12 +445,7 @@
 "show": false
 },
 "showHeader": true,
-"sortBy": [
-{
-"desc": true,
-"displayName": "Utilization"
-}
-]
+"sortBy": []
 },
 "pluginVersion": "9.1.6",
 "targets": [
@@ -472,7 +467,7 @@
 "measurement": "lpar_details",
 "orderByTime": "ASC",
 "policy": "default",
-"query": "SELECT last(\"weight\") AS \"Weight\", last(\"entitledProcUnits\") AS \"Entitled\", last(\"currentVirtualProcessors\") AS \"VP\", (last(\"utilizedProcUnits\") / last(\"maxProcUnits\")) * 100 AS \"Utilization\", last(\"mode\") AS \"Mode\" FROM \"vios_processor\" WHERE (\"servername\" =~ /^$ServerName$/) AND (\"viosname\" =~ /^$ViosName$/) AND $timeFilter GROUP BY \"viosname\" fill(previous)",
+"query": "SELECT last(\"weight\") AS \"Weight\", last(\"entitledProcUnits\") AS \"Entitled\", last(\"currentVirtualProcessors\") AS \"VP\", (mean(\"utilizedProcUnits\") / mean(\"entitledProcUnits\")) * 100 AS \"Utilization\", last(\"mode\") AS \"Mode\" FROM \"vios_processor\" WHERE (\"servername\" =~ /^$ServerName$/) AND (\"viosname\" =~ /^$ViosName$/) AND $timeFilter GROUP BY \"viosname\" fill(previous)",
 "queryType": "randomWalk",
 "rawQuery": true,
 "refId": "A",
@@ -1713,7 +1708,7 @@
 },
 "definition": "SHOW TAG VALUES FROM \"server_processor\" WITH KEY = \"servername\" WHERE time > now() - 24h",
 "hide": 0,
-"includeAll": false,
+"includeAll": true,
 "label": "Server",
 "multi": true,
 "multiFormat": "regex values",
@@ -1786,6 +1781,6 @@
 "timezone": "browser",
 "title": "HMCi - Power VIO Overview",
 "uid": "DDNEv5vGz",
-"version": 2,
+"version": 3,
 "weekStart": ""
 }

@@ -1,7 +1,7 @@
 {
 "__inputs": [
 {
-"name": "DS_INFLUXDB",
+"name": "DS_HMCI",
 "label": "Database",
 "description": "",
 "type": "datasource",
@@ -21,7 +21,7 @@
 "type": "grafana",
 "id": "grafana",
 "name": "Grafana",
-"version": "9.1.3"
+"version": "9.1.6"
 },
 {
 "type": "datasource",
@@ -65,7 +65,7 @@
 }
 ]
 },
-"description": "https://bitbucket.org/mnellemann/hmci/",
+"description": "https://git.data.coop/nellemann/hmci/ - Metrics from IBM Power Systems",
 "editable": true,
 "fiscalYearStartMonth": 0,
 "gnetId": 1465,
@@ -77,7 +77,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "gridPos": {
 "h": 3,
@@ -87,15 +87,15 @@
 },
 "id": 29,
 "options": {
-"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
+"content": "## Metrics collected from IBM Power HMC\n \nFor more information visit: [git.data.coop/nellemann/hmci](https://git.data.coop/nellemann/hmci)\n ",
 "mode": "markdown"
 },
-"pluginVersion": "9.1.3",
+"pluginVersion": "9.1.6",
 "targets": [
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "refId": "A"
 }
@@ -106,7 +106,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "description": "",
 "fieldConfig": {
@@ -155,13 +155,13 @@
 "showThresholdLabels": false,
 "showThresholdMarkers": false
 },
-"pluginVersion": "9.1.3",
+"pluginVersion": "9.1.6",
 "targets": [
 {
 "alias": "$tag_servername - $tag_viosname",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "dsType": "influxdb",
 "groupBy": [
@@ -194,7 +194,7 @@
 "measurement": "vios_processor",
 "orderByTime": "ASC",
 "policy": "default",
-"query": "SELECT last(\"utilizedProcUnits\") / last(\"maxProcUnits\") AS \"utilization\" FROM \"vios_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"viosname\" =~ /^$ViosName$/) AND $timeFilter GROUP BY time($interval), \"viosname\", \"servername\" fill(none)",
+"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"entitledProcUnits\") AS \"utilization\" FROM \"vios_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"viosname\" =~ /^$ViosName$/) AND $timeFilter GROUP BY time($interval), \"viosname\", \"servername\" fill(none)",
 "rawQuery": true,
 "refId": "A",
 "resultFormat": "time_series",
@@ -257,7 +257,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "description": "",
 "fieldConfig": {
@@ -344,7 +344,7 @@
 "alias": "$tag_servername - $tag_viosname",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "dsType": "influxdb",
 "groupBy": [
@@ -377,7 +377,7 @@
 "measurement": "vios_processor",
 "orderByTime": "ASC",
 "policy": "default",
-"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"maxProcUnits\") AS \"utilization\" FROM \"vios_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"viosname\" =~ /^$ViosName$/) AND $timeFilter GROUP BY time($interval), \"viosname\", \"servername\" fill(none)",
+"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"entitledProcUnits\") AS \"utilization\" FROM \"vios_processor\" WHERE (\"servername\" =~ /^$ServerName$/ AND \"viosname\" =~ /^$ViosName$/) AND $timeFilter GROUP BY time($interval), \"viosname\", \"servername\" fill(none)",
 "rawQuery": true,
 "refId": "A",
 "resultFormat": "time_series",
@@ -440,7 +440,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "description": "",
 "fieldConfig": {
@@ -527,7 +527,7 @@
 "alias": "$tag_servername - $tag_viosname ($tag_location - $col)",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "dsType": "influxdb",
 "groupBy": [
@@ -645,7 +645,7 @@
 {
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "description": "",
 "fieldConfig": {
@@ -727,7 +727,7 @@
 "alias": "$tag_servername - $tag_viosname ($tag_location - $col)",
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "dsType": "influxdb",
 "groupBy": [
@@ -860,7 +860,7 @@
 "current": {},
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "definition": "SHOW TAG VALUES FROM \"server_processor\" WITH KEY = \"servername\" WHERE time > now() - 24h",
 "hide": 0,
@@ -884,7 +884,7 @@
 "current": {},
 "datasource": {
 "type": "influxdb",
-"uid": "${DS_INFLUXDB}"
+"uid": "${DS_HMCI}"
 },
 "definition": "SHOW TAG VALUES FROM \"vios_details\" WITH KEY = \"viosname\" WHERE servername =~ /$ServerName/ AND time > now() - 24h",
 "hide": 0,
@@ -906,7 +906,7 @@
 ]
 },
 "time": {
-"from": "now-2d",
+"from": "now-7d",
 "now": false,
 "to": "now-30s"
 },
|
|||
"timezone": "browser",
|
||||
"title": "HMCi - Power VIO Utilization",
|
||||
"uid": "DDNEv5vGy",
|
||||
"version": 10,
|
||||
"version": 2,
|
||||
"weekStart": ""
|
||||
}
|
||||
|
|
|
@@ -1,17 +1,25 @@
 # HMCi Configuration
 # Copy this file into /etc/hmci.toml and customize it to your environment.

 ###
 ### Define one InfluxDB to save metrics into
 ### There must be only one and it should be named [influx]
 ###

+# InfluxDB v1.x example
+#[influx]
+#url = "http://localhost:8086"
+#username = "root"
+#password = ""
+#database = "hmci"
+
+# InfluxDB v2.x example
 [influx]
 url = "http://localhost:8086"
-username = "root"
-password = ""
-database = "hmci"
+org = "myOrg"
+token = "rAnd0mT0k3nG3neRaT3dByInF1uxDb=="
+bucket = "hmci"

 ###

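The `[influx]` block above is only half of the configuration; the same file also defines the HMCs to poll. A minimal, hypothetical HMC entry might look like the following (the section name, URL and credentials are placeholders, not taken from this diff; see [doc/hmci.toml](doc/hmci.toml) for the authoritative format):

```toml
# Hypothetical example only - adjust the name, URL and credentials to your site
[hmc.site1]
url = "https://10.10.10.10:12443"
username = "hmci"
password = "changeMe"
```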
@@ -1,19 +1,20 @@
 # Instructions for AIX Systems

 Ensure you have **correct date/time** and NTPd running to keep it accurate!

 Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.

 More details are available in the [README.md](../README.md) file.

 - Grafana and InfluxDB can be downloaded from the [Power DevOps](https://www.power-devops.com/) website - look under the *Monitor* section.
-- Ensure Java (version 8 or later) is installed and available in your PATH.
+- Ensure Java (version 8 or later) is installed and available in your PATH (eg. in the */etc/environment* file).

 ## Download and Install HMCi

+[Download](https://git.data.coop/nellemann/-/packages/generic/hmci/) the latest version of HMCi package for rpm.
+
 ```shell
-wget https://bitbucket.org/mnellemann/hmci/downloads/hmci-1.3.1-1_all.rpm
-rpm -i --ignoreos hmci-1.3.1-1_all.rpm
+rpm -ivh --ignoreos hmci-1.4.2-1_all.rpm
 cp /opt/hmci/doc/hmci.toml /etc/
 ```

@@ -2,14 +2,14 @@
 Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.

-More details are available in the [README.md](../README.md) file.
+Ensure you have **correct date/time** and NTPd running to keep it accurate!

 All commands should be run as root or through sudo.

 ## Install the Java Runtime from repository

 ```shell
-apt-get install default-jre-headless
+apt-get install default-jre-headless wget
 ```
@@ -25,30 +25,38 @@ systemctl start influxdb

 Run the ```influx``` cli command and create the *hmci* database.

 ```sql
 CREATE DATABASE "hmci" WITH DURATION 365d REPLICATION 1;
 ```

 ## Download and Install Grafana

 ```shell
-sudo apt-get install -y adduser libfontconfig1
-wget https://dl.grafana.com/oss/release/grafana_9.1.3_amd64.deb
-dpkg -i grafana_9.1.3_amd64.deb
+apt-get install -y adduser libfontconfig1
+wget https://dl.grafana.com/oss/release/grafana_9.1.7_amd64.deb
+dpkg -i grafana_9.1.7_amd64.deb
 systemctl daemon-reload
 systemctl enable grafana-server
 systemctl start grafana-server
 ```

 When logged in to Grafana (port 3000, admin/admin) create a datasource that points to the local InfluxDB. Now import the provided dashboards.

 ## Download and Install HMCi

+[Download](https://git.data.coop/nellemann/-/packages/generic/hmci/) the latest version of HMCi packaged for deb.
+
 ```shell
-wget https://bitbucket.org/mnellemann/hmci/downloads/hmci_1.3.1-1_all.deb
-dpkg -i hmci_1.3.1-1_all.deb
+wget https://git.data.coop/api/packages/nellemann/generic/hmci/v1.4.2/hmci_1.4.2-1_all.deb
+dpkg -i hmci_1.4.2-1_all.deb
 cp /opt/hmci/doc/hmci.toml /etc/
 cp /opt/hmci/doc/hmci.service /etc/systemd/system/
 systemctl daemon-reload
 systemctl enable hmci
 ```

-Now modify */etc/hmci.toml* and test setup by running ```/opt/hmci/bin/hmci -d``` manually and verify connection to HMC and InfluxDB. Afterwards start service with ```systemctl start hmci``` .
+## Configure HMCi
+
+Now modify **/etc/hmci.toml** (edit URL and credentials to your HMCs) and test the setup by running ```/opt/hmci/bin/hmci -d``` in the foreground/terminal and look for any errors.
+
+Press CTRL+C to stop and then start as a background service with ```systemctl start hmci```.
+
+You can see the log/output by running ```journalctl -f -u hmci```.

@@ -0,0 +1,40 @@
+# Grafana Setup
+
+When installed Grafana listens on [http://localhost:3000](http://localhost:3000) and you can login as user *admin* with password *admin*. Once logged in you are asked to change the default password.
+
+## Datasource
+
+- Configure Grafana to use InfluxDB as a new datasource
+- Name the datasource **hmci** to make it obvious what it contains.
+- You would typically use *http://localhost:8086* without any credentials.
+- For InfluxDB 2.x add a custom header: Authorization = Token myTokenFromInfluxDB
+- The name of the database would be *hmci* (or another name you used when creating it)
+- **NOTE:** set *Min time interval* to *30s* or *1m* depending on your HMCi *refresh* setting.
+
+## Dashboards
+
+Import all or some of the example dashboards from [dashboards/*.json](dashboards/) into Grafana as a starting point and get creative making your own cool dashboards - please share anything useful :)
+
+- When importing a dashboard, select the **hmci** datasource you have created.
+
+## Security and Proxy
+
+The easiest way to secure Grafana with https is to put it behind a proxy server such as nginx.
+
+If you want to serve /grafana as shown below, you also need to edit */etc/grafana/grafana.ini* and change the *root_url*:
+
+```
+root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
+```
+
+Nginx snippet:
+
+```nginx
+location /grafana/ {
+  proxy_pass http://localhost:3000/;
+  proxy_set_header Host $host;
+}
+```

@@ -0,0 +1,39 @@
+# IBM Power HMC Preparations
+
+Ensure you have **correct date/time** and NTPd running to keep it accurate!
+
+- Login to your HMC
+- Navigate to *Console Settings*
+- Go to *Change Date and Time*
+- Set correct timezone, if not done already
+- Configure one or more NTP servers, if not done already
+- Enable the NTP client, if not done already
+- Navigate to *Users and Security*
+- Create a new read-only/viewer **hmci** user, which will be used to connect to the HMC.
+- Click *Manage User Profiles and Access*, edit the newly created *hmci* user and click *User Properties*:
+- Set *Session timeout minutes* to **120** (or at least 61 minutes)
+- Set *Verify timeout minutes* to **15**
+- Set *Idle timeout minutes* to **15**
+- Set *Minimum time in days between password changes* to **0**
+- **Enable** *Allow remote access via the web*
+- Navigate to *HMC Management* and *Console Settings*
+- Click *Change Performance Monitoring Settings*:
+- Enable *Performance Monitoring Data Collection for Managed Servers*: **All On**
+- Set *Performance Data Storage* to **1** day or preferable more
+
+If you do not enable *Performance Monitoring Data Collection for Managed Servers*, you will see errors such as *Unexpected response: 403*.
+
+Use the HMCi debug option (*--debug*) to get more details about what is going on.
+
+## Configure date/time through CLI
+
+Example showing how you configure related settings through the HMC CLI:
+
+```shell
+chhmc -c date -s modify --datetime MMDDhhmm   # Set current date/time: MMDDhhmm[[CC]YY][.ss]
+chhmc -c date -s modify --timezone Europe/Copenhagen   # Configure your timezone
+chhmc -c xntp -s enable   # Enable the NTP service
+chhmc -c xntp -s add -a IP_Addr   # Add a remote NTP server
+```
+Remember to reboot your HMC after changing the timezone.

@@ -0,0 +1,10 @@
+# InfluxDB Notes
+
+## Delete data
+
+To delete *all* data before a specific date, run:
+
+```sql
+DELETE WHERE time < '2023-01-01'
+```

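Rather than deleting by hand, old data can also be expired automatically. Assuming the database was created with the default *autogen* retention policy (as in the install notes elsewhere in this repository), it can be inspected and shortened with standard InfluxQL; the statements below are illustrative and not part of this commit:

```sql
-- List the retention policies defined on the hmci database
SHOW RETENTION POLICIES ON "hmci"

-- Shorten retention so data older than 90 days is dropped automatically
ALTER RETENTION POLICY "autogen" ON "hmci" DURATION 90d SHARD DURATION 7d
```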
@@ -2,16 +2,16 @@
 Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.

-More details are available in the [README.md](../README.md) file. If you are running Linux on Power (ppc64le) you should look for ppc64le packages at the [Power DevOps](https://www.power-devops.com/) website.
+Ensure you have **correct date/time** and NTPd running to keep it accurate!

 All commands should be run as root or through sudo.

 ## Install the Java Runtime from repository

 ```shell
-dnf install java-11-openjdk-headless
+dnf install java-11-openjdk-headless wget
 # or
-yum install java-11-openjdk-headless
+yum install java-11-openjdk-headless wget
 ```
@@ -24,33 +24,45 @@ systemctl daemon-reload
 systemctl enable influxdb
 systemctl start influxdb
 ```
+If you are running Linux on Power, you can find ppc64le InfluxDB packages on the [Power DevOps](https://www.power-devops.com/influxdb) site. Remember to pick the 1.8 or 1.9 version.

 Run the ```influx``` cli command and create the *hmci* database.

 ```sql
 CREATE DATABASE "hmci" WITH DURATION 365d REPLICATION 1;
 ```

 ## Download and Install Grafana

 ```shell
-wget https://dl.grafana.com/oss/release/grafana-9.1.3-1.x86_64.rpm
-rpm -ivh grafana-9.1.3-1.x86_64.rpm
+wget https://dl.grafana.com/oss/release/grafana-9.1.7-1.x86_64.rpm
+rpm -ivh grafana-9.1.7-1.x86_64.rpm
 systemctl daemon-reload
 systemctl enable grafana-server
 systemctl start grafana-server
 ```

-When logged in to Grafana (port 3000, admin/admin) create a datasource that points to the local InfluxDB. Now import the provided dashboards.
+If you are running Linux on Power, you can find ppc64le Grafana packages on the [Power DevOps](https://www.power-devops.com/grafana) site.

 ## Download and Install HMCi

+[Download](https://git.data.coop/nellemann/-/packages/generic/hmci/) the latest version of HMCi packaged for rpm.
+
 ```shell
-wget https://bitbucket.org/mnellemann/hmci/downloads/hmci-1.3.1-1_all.rpm
-rpm -ivh hmci-1.3.1-1_all.rpm
+wget https://git.data.coop/api/packages/nellemann/generic/hmci/v1.4.4/hmci-1.4.2-1.noarch.rpm
+rpm -ivh hmci-1.4.4-1_all.rpm
 cp /opt/hmci/doc/hmci.toml /etc/
 cp /opt/hmci/doc/hmci.service /etc/systemd/system/
 systemctl daemon-reload
 systemctl enable hmci
 systemctl start hmci
 ```

-Now modify */etc/hmci.toml* and test your setup by running ```/opt/hmci/bin/hmci -d``` manually and verify connection to HMC and InfluxDB. Afterwards start service with ```systemctl start hmci``` .
+## Configure HMCi
+
+Now modify **/etc/hmci.toml** (edit URL and credentials to your HMCs) and test the setup by running ```/opt/hmci/bin/hmci -d``` in the foreground/terminal and look for any errors.
+
+Press CTRL+C to stop and then start as a background service with ```systemctl start hmci```.
+
+You can see the log/output by running ```journalctl -f -u hmci```.

@@ -2,14 +2,14 @@
 Please note that the software versions referenced in this document might have changed and might not be available/working unless updated.

-More details are available in the [README.md](../README.md) file. If you are running Linux on Power (ppc64le) you should look for ppc64le packages at the [Power DevOps](https://www.power-devops.com/) website.
+Ensure you have **correct date/time** and NTPd running to keep it accurate!

 All commands should be run as root or through sudo.

 ## Install the Java Runtime from repository

 ```shell
-zypper install java-11-openjdk-headless
+zypper install java-11-openjdk-headless wget
 ```
@@ -23,31 +23,47 @@ systemctl enable influxdb
 systemctl start influxdb
 ```

+If you are running Linux on Power, you can find ppc64le InfluxDB packages on the [Power DevOps](https://www.power-devops.com/influxdb) site. Remember to pick the 1.8 or 1.9 version.

 Run the ```influx``` cli command and create the *hmci* database.

 ```sql
 CREATE DATABASE "hmci" WITH DURATION 365d REPLICATION 1;
 ```

 ## Download and Install Grafana

 ```shell
-wget https://dl.grafana.com/oss/release/grafana-9.1.3-1.x86_64.rpm
-rpm -ivh --nodeps grafana-9.1.3-1.x86_64.rpm
+wget https://dl.grafana.com/oss/release/grafana-9.1.7-1.x86_64.rpm
+rpm -ivh --nodeps grafana-9.1.7-1.x86_64.rpm
 systemctl daemon-reload
 systemctl enable grafana-server
 systemctl start grafana-server
 ```

-When logged in to Grafana (port 3000, admin/admin) create a datasource that points to the local InfluxDB. Now import the provided dashboards.
+If you are running Linux on Power, you can find ppc64le Grafana packages on the [Power DevOps](https://www.power-devops.com/grafana) site.

 ## Download and Install HMCi

+[Download](https://git.data.coop/nellemann/-/packages/generic/hmci/) the latest version of HMCi packaged for rpm.
+
 ```shell
-wget https://bitbucket.org/mnellemann/hmci/downloads/hmci-1.3.1-1_all.rpm
-rpm -ivh hmci-1.3.1-1_all.rpm
+wget https://git.data.coop/api/packages/nellemann/generic/hmci/v1.4.2/hmci-1.4.2-1.noarch.rpm
+rpm -ivh hmci-1.4.2-1_all.rpm
 cp /opt/hmci/doc/hmci.toml /etc/
 cp /opt/hmci/doc/hmci.service /etc/systemd/system/
 systemctl daemon-reload
 systemctl enable hmci
 ```

-Now modify */etc/hmci.toml* and test your setup by running ```/opt/hmci/bin/hmci -d``` manually and verify connection to HMC and InfluxDB. Afterwards start service with ```systemctl start hmci``` .
+## Configure HMCi
+
+Now modify **/etc/hmci.toml** (edit URL and credentials to your HMCs) and test the setup by running ```/opt/hmci/bin/hmci -d``` in the foreground/terminal and look for any errors.
+
+Press CTRL+C to stop and then start as a background service with ```systemctl start hmci```.
+
+You can see the log/output by running ```journalctl -f -u hmci```.

@@ -1,3 +1,3 @@
 projectId = hmci
 projectGroup = biz.nellemann.hmci
-projectVersion = 1.4.1
+projectVersion = 1.4.5

Binary file not shown.

@@ -1,5 +1,5 @@
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-7.5.1-bin.zip
+distributionUrl=https\://services.gradle.org/distributions/gradle-7.6-bin.zip
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists

@@ -205,6 +205,12 @@ set -- \
 org.gradle.wrapper.GradleWrapperMain \
 "$@"

+# Stop when "xargs" is not available.
+if ! command -v xargs >/dev/null 2>&1
+then
+    die "xargs is not available"
+fi
+
 # Use "xargs" to parse quoted args.
 #
 # With -n1 it outputs one arg per line, with the quotes and backslashes removed.

@@ -14,7 +14,7 @@
 @rem limitations under the License.
 @rem

-@if "%DEBUG%" == "" @echo off
+@if "%DEBUG%"=="" @echo off
 @rem ##########################################################################
 @rem
 @rem Gradle startup script for Windows
@@ -25,7 +25,7 @@
 if "%OS%"=="Windows_NT" setlocal

 set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
+if "%DIRNAME%"=="" set DIRNAME=.
 set APP_BASE_NAME=%~n0
 set APP_HOME=%DIRNAME%
@@ -40,7 +40,7 @@ if defined JAVA_HOME goto findJavaFromJavaHome
 set JAVA_EXE=java.exe
 %JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
+if %ERRORLEVEL% equ 0 goto execute

 echo.
 echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
@@ -75,13 +75,15 @@ set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
 :end
 @rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
+if %ERRORLEVEL% equ 0 goto mainEnd

 :fail
 rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
 rem the _cmd.exe /c_ return code!
-if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
-exit /b 1
+set EXIT_CODE=%ERRORLEVEL%
+if %EXIT_CODE% equ 0 set EXIT_CODE=1
+if not ""=="%GRADLE_EXIT_CONSOLE%" exit %EXIT_CODE%
+exit /b %EXIT_CODE%

 :mainEnd
 if "%OS%"=="Windows_NT" endlocal

@@ -15,17 +15,19 @@
 */
 package biz.nellemann.hmci;

-import biz.nellemann.hmci.dto.toml.Configuration;
-import com.fasterxml.jackson.dataformat.toml.TomlMapper;
-import picocli.CommandLine;
-import picocli.CommandLine.Option;
-import picocli.CommandLine.Command;
-
 import java.io.File;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.Callable;

+import com.fasterxml.jackson.dataformat.toml.TomlMapper;
+
+import biz.nellemann.hmci.dto.toml.Configuration;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
 @Command(name = "hmci",
 mixinStandardHelpOptions = true,
 versionProvider = biz.nellemann.hmci.VersionProvider.class,
@@ -90,7 +92,7 @@ public class Application implements Callable<Integer> {
 }

 influxClient.logoff();
-} catch (Exception e) {
+} catch (IOException | InterruptedException e) {
 System.err.println(e.getMessage());
 return 1;
 }

#### InfluxClient.java

```diff
@@ -15,67 +15,84 @@
  */
 package biz.nellemann.hmci;

-import biz.nellemann.hmci.dto.toml.InfluxConfiguration;
-import org.influxdb.BatchOptions;
-import org.influxdb.InfluxDB;
-import org.influxdb.InfluxDBFactory;
-import org.influxdb.dto.Point;
-import static java.lang.Thread.sleep;
-import java.util.ArrayList;
-import java.util.List;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 import java.time.Instant;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.TimeUnit;
+
+import com.influxdb.client.InfluxDBClient;
+import com.influxdb.client.InfluxDBClientFactory;
+import com.influxdb.client.WriteApi;
+import com.influxdb.client.WriteOptions;
+import com.influxdb.client.domain.WritePrecision;
+import com.influxdb.client.write.Point;
+
+import biz.nellemann.hmci.dto.toml.InfluxConfiguration;
+
+import static java.lang.Thread.sleep;

 public final class InfluxClient {

     private final static Logger log = LoggerFactory.getLogger(InfluxClient.class);

     final private String url;
     final private String username;
     final private String password;
     final private String database;
+    final private String org;      // v2 only
+    final private String token;
+    final private String bucket;   // Bucket in v2, Database in v1

+    private InfluxDBClient influxDBClient;
+    private WriteApi writeApi;
-    private InfluxDB influxDB;

     InfluxClient(InfluxConfiguration config) {
         this.url = config.url;
         this.username = config.username;
         this.password = config.password;
         this.database = config.database;
+        if(config.org != null) {
+            this.org = config.org;
+        } else {
+            this.org = "hmci";  // In InfluxDB 1.x, there is no concept of organization.
+        }
+        if(config.token != null) {
+            this.token = config.token;
+        } else {
+            this.token = config.username + ":" + config.password;
+        }
+        if(config.bucket != null) {
+            this.bucket = config.bucket;
+        } else {
+            this.bucket = config.database;
+        }
     }

     synchronized void login() throws RuntimeException, InterruptedException {

-        if(influxDB != null) {
+        if(influxDBClient != null) {
             return;
         }

         boolean connected = false;
         int loginErrors = 0;

         do {
             try {
                 log.debug("Connecting to InfluxDB - {}", url);
-                influxDB = InfluxDBFactory.connect(url, username, password).setDatabase(database);
-                influxDB.version(); // This ensures that we actually try to connect to the db
+                influxDBClient = InfluxDBClientFactory.create(url, token.toCharArray(), org, bucket);
+                influxDBClient.version(); // This ensures that we actually try to connect to the db
+                Runtime.getRuntime().addShutdownHook(new Thread(influxDBClient::close));

-                influxDB.enableBatch(
-                    BatchOptions.DEFAULTS
-                        .flushDuration(5000)
-                        .threadFactory(runnable -> {
-                            Thread thread = new Thread(runnable);
-                            thread.setDaemon(true);
-                            return thread;
-                        })
-                );
-                Runtime.getRuntime().addShutdownHook(new Thread(influxDB::close));
+                // Todo: Handle events - https://github.com/influxdata/influxdb-client-java/tree/master/client#handle-the-events
+                writeApi = influxDBClient.makeWriteApi(
+                    WriteOptions.builder()
+                        .batchSize(15_000)
+                        .bufferLimit(500_000)
+                        .flushInterval(5_000)
+                        .build());

                 connected = true;

             } catch(Exception e) {
                 sleep(15 * 1000);
                 if(loginErrors++ > 3) {
@@ -91,52 +108,32 @@ public final class InfluxClient {

     synchronized void logoff() {
-        if(influxDB != null) {
-            influxDB.close();
+        if(influxDBClient != null) {
+            influxDBClient.close();
         }
-        influxDB = null;
+        influxDBClient = null;
     }

-    /*
-    public void write(List<Measurement> measurements, Instant timestamp, String name) {
-        log.debug("write() - measurement: {} {}", name, measurements.size());
-        processMeasurementMap(measurements, timestamp, name).forEach( (point) -> { influxDB.write(point); });
-    }*/
-
     public void write(List<Measurement> measurements, String name) {
         log.debug("write() - measurement: {} {}", name, measurements.size());
-        processMeasurementMap(measurements, name).forEach( (point) -> { influxDB.write(point); });
+        if(!measurements.isEmpty()) {
+            processMeasurementMap(measurements, name).forEach((point) -> {
+                writeApi.writePoint(point);
+            });
+        }
     }

-    /*
-    private List<Point> processMeasurementMap(List<Measurement> measurements, Instant timestamp, String name) {
-        List<Point> listOfPoints = new ArrayList<>();
-        measurements.forEach( (m) -> {
-            Point.Builder builder = Point.measurement(name)
-                .time(timestamp.getEpochSecond(), TimeUnit.SECONDS)
-                .tag(m.tags)
-                .fields(m.fields);
-            listOfPoints.add(builder.build());
-        });
-        return listOfPoints;
-    }*/
-
     private List<Point> processMeasurementMap(List<Measurement> measurements, String name) {
         List<Point> listOfPoints = new ArrayList<>();
         measurements.forEach( (m) -> {
             log.trace("processMeasurementMap() - timestamp: {}, tags: {}, fields: {}", m.timestamp, m.tags, m.fields);
-            Point.Builder builder = Point.measurement(name)
-                .time(m.timestamp.getEpochSecond(), TimeUnit.SECONDS)
-                .tag(m.tags)
-                .fields(m.fields);
-            listOfPoints.add(builder.build());
+            Point point = new Point(name)
+                .time(m.timestamp.getEpochSecond(), WritePrecision.S)
+                .addTags(m.tags)
+                .addFields(m.fields);
+            listOfPoints.add(point);
         });
         return listOfPoints;
     }
```
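The `login()` loop in `InfluxClient` retries the connection every 15 seconds and gives up after a few consecutive failures. A minimal, self-contained sketch of that bounded-retry pattern, with the `Supplier`-based connect hook and class name being illustrative rather than taken from the project:

```java
import java.util.function.Supplier;

public class RetryLogin {

    /**
     * Repeatedly invoke the connect action until it succeeds or the error
     * budget is exhausted - the same shape as InfluxClient.login(): sleep
     * between attempts, give up after more than 3 consecutive errors.
     */
    static boolean connectWithRetry(Supplier<Boolean> connect, long sleepMillis) throws InterruptedException {
        boolean connected = false;
        int loginErrors = 0;
        do {
            try {
                if (connect.get()) {
                    connected = true;
                }
            } catch (RuntimeException e) {
                Thread.sleep(sleepMillis);
                if (loginErrors++ > 3) {
                    throw new RuntimeException("Giving up connecting after " + loginErrors + " attempts", e);
                }
            }
        } while (!connected);
        return connected;
    }

    public static void main(String[] args) throws InterruptedException {
        // Fail twice, then succeed - simulates a database that comes up late.
        int[] calls = {0};
        boolean ok = connectWithRetry(() -> {
            if (calls[0]++ < 2) throw new RuntimeException("connection refused");
            return true;
        }, 0L);
        System.out.println(ok + " after " + calls[0] + " attempts"); // prints "true after 3 attempts"
    }
}
```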
#### LogicalPartition.java

```diff
@@ -151,6 +151,7 @@ class LogicalPartition extends Resource {
     // LPAR Details
     List<Measurement> getDetails(int sample) throws NullPointerException {

+        log.debug("getDetails()");
         List<Measurement> list = new ArrayList<>();

         Map<String, String> tagsMap = new HashMap<>();
@@ -175,7 +176,7 @@ class LogicalPartition extends Resource {

     // LPAR Memory
     List<Measurement> getMemoryMetrics(int sample) throws NullPointerException {

         log.debug("getMemoryMetrics()");
         List<Measurement> list = new ArrayList<>();

         Map<String, String> tagsMap = new HashMap<>();
@@ -197,7 +198,7 @@ class LogicalPartition extends Resource {

     // LPAR Processor
     List<Measurement> getProcessorMetrics(int sample) throws NullPointerException {

         log.debug("getProcessorMetrics()");
         List<Measurement> list = new ArrayList<>();

         HashMap<String, String> tagsMap = new HashMap<>();
@@ -231,7 +232,7 @@ class LogicalPartition extends Resource {

     // LPAR Network - Virtual
     List<Measurement> getVirtualEthernetAdapterMetrics(int sample) throws NullPointerException {

         log.debug("getVirtualEthernetAdapterMetrics()");
         List<Measurement> list = new ArrayList<>();

         metric.getSample(sample).lparsUtil.network.virtualEthernetAdapters.forEach(adapter -> {
@@ -272,7 +273,7 @@ class LogicalPartition extends Resource {

     // LPAR Storage - Virtual Generic
     List<Measurement> getVirtualGenericAdapterMetrics(int sample) throws NullPointerException {

         log.debug("getVirtualGenericAdapterMetrics()");
         List<Measurement> list = new ArrayList<>();

         metric.getSample(sample).lparsUtil.storage.genericVirtualAdapters.forEach(adapter -> {
@@ -303,7 +304,7 @@ class LogicalPartition extends Resource {

     // LPAR Storage - Virtual FC
     List<Measurement> getVirtualFibreChannelAdapterMetrics(int sample) throws NullPointerException {

         log.debug("getVirtualFibreChannelAdapterMetrics()");
         List<Measurement> list = new ArrayList<>();

         metric.getSample(sample).lparsUtil.storage.virtualFiberChannelAdapters.forEach(adapter -> {
@@ -334,7 +335,7 @@ class LogicalPartition extends Resource {

     // LPAR Network - SR-IOV Logical Ports
     List<Measurement> getSriovLogicalPorts(int sample) throws NullPointerException {

         log.debug("getSriovLogicalPorts()");
         List<Measurement> list = new ArrayList<>();

         metric.getSample(sample).lparsUtil.network.sriovLogicalPorts.forEach(port -> {
@@ -345,7 +346,6 @@ class LogicalPartition extends Resource {
             tagsMap.put("servername", managedSystem.entry.getName());
             tagsMap.put("lparname", entry.getName());
             tagsMap.put("location", port.physicalLocation);
-            tagsMap.put("type", port.configurationType);
             log.trace("getSriovLogicalPorts() - tags: " + tagsMap);

             fieldsMap.put("sentBytes", port.sentBytes);
```
#### ManagedSystem.java

```diff
@@ -291,7 +291,7 @@ class ManagedSystem extends Resource {

     // System details
     List<Measurement> getDetails(int sample) throws NullPointerException {

         log.debug("getDetails()");
         List<Measurement> list = new ArrayList<>();
         Map<String, String> tagsMap = new TreeMap<>();
         Map<String, Object> fieldsMap = new TreeMap<>();
@@ -321,7 +321,7 @@ class ManagedSystem extends Resource {

     // System Memory
     List<Measurement> getMemoryMetrics(int sample) throws NullPointerException {

         log.debug("getMemoryMetrics()");
         List<Measurement> list = new ArrayList<>();
         HashMap<String, String> tagsMap = new HashMap<>();
         Map<String, Object> fieldsMap = new HashMap<>();
@@ -344,7 +344,7 @@ class ManagedSystem extends Resource {

     // System Processor
     List<Measurement> getProcessorMetrics(int sample) throws NullPointerException {

         log.debug("getProcessorMetrics()");
         List<Measurement> list = new ArrayList<>();
         HashMap<String, String> tagsMap = new HashMap<>();
         HashMap<String, Object> fieldsMap = new HashMap<>();
@@ -365,7 +365,7 @@ class ManagedSystem extends Resource {

     // Sytem Shared ProcessorPools
     List<Measurement> getSharedProcessorPools(int sample) throws NullPointerException {

         log.debug("getSharedProcessorPools()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).serverUtil.sharedProcessorPool.forEach(sharedProcessorPool -> {
             HashMap<String, String> tagsMap = new HashMap<>();
@@ -392,7 +392,7 @@ class ManagedSystem extends Resource {

     // System Physical ProcessorPool
     List<Measurement> getPhysicalProcessorPool(int sample) throws NullPointerException {

         log.debug("getPhysicalProcessorPool()");
         List<Measurement> list = new ArrayList<>();
         HashMap<String, String> tagsMap = new HashMap<>();
         HashMap<String, Object> fieldsMap = new HashMap<>();
@@ -420,7 +420,7 @@ class ManagedSystem extends Resource {

     // VIO Details
     List<Measurement> getVioDetails(int sample) throws NullPointerException {

         log.debug("getVioDetails()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach(vio -> {

@@ -446,7 +446,7 @@ class ManagedSystem extends Resource {

     // VIO Memory
     List<Measurement> getVioMemoryMetrics(int sample) throws NullPointerException {

         log.debug("getVioMemoryMetrics()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach(vio -> {

@@ -474,7 +474,7 @@ class ManagedSystem extends Resource {

     // VIO Processor
     List<Measurement> getVioProcessorMetrics(int sample) throws NullPointerException {

         log.debug("getVioProcessorMetrics()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach(vio -> {

@@ -509,7 +509,7 @@ class ManagedSystem extends Resource {

     // VIOs - Network
     List<Measurement> getVioNetworkLpars(int sample) throws NullPointerException {

         log.debug("getVioNetworkLpars()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach(vio -> {

@@ -532,7 +532,7 @@ class ManagedSystem extends Resource {

     // VIO Network - Shared
     List<Measurement> getVioNetworkSharedAdapters(int sample) throws NullPointerException {

         log.debug("getVioNetworkSharedAdapters()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach(vio -> {
             vio.network.sharedAdapters.forEach(adapter -> {
@@ -565,7 +565,7 @@ class ManagedSystem extends Resource {

     // VIO Network - Virtual
     List<Measurement> getVioNetworkVirtualAdapters(int sample) throws NullPointerException {

         log.debug("getVioNetworkVirtualAdapters()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach( vio -> {
             vio.network.virtualEthernetAdapters.forEach( adapter -> {
@@ -605,7 +605,7 @@ class ManagedSystem extends Resource {

     // VIO Network - Generic
     List<Measurement> getVioNetworkGenericAdapters(int sample) throws NullPointerException {

         log.debug("getVioNetworkGenericAdapters()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach( vio -> {
             vio.network.genericAdapters.forEach( adapter -> {
@@ -637,7 +637,7 @@ class ManagedSystem extends Resource {

     // VIOs - Storage
     List<Measurement> getVioStorageLpars(int sample) throws NullPointerException {

         log.debug("getVioStorageLpars()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach(vio -> {

@@ -660,7 +660,7 @@ class ManagedSystem extends Resource {

     // VIO Storage FC
     List<Measurement> getVioStorageFiberChannelAdapters(int sample) throws NullPointerException {

         log.debug("getVioStorageFiberChannelAdapters()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach( vio -> {
             log.trace("getVioStorageFiberChannelAdapters() - VIO: " + vio.name);
@@ -694,8 +694,9 @@ class ManagedSystem extends Resource {

     // VIO Storage - Physical
     List<Measurement> getVioStoragePhysicalAdapters(int sample) throws NullPointerException {

         log.debug("getVioStoragePhysicalAdapters()");
         List<Measurement> list = new ArrayList<>();

         metric.getSample(sample).viosUtil.forEach( vio -> {
             log.trace("getVioStoragePhysicalAdapters() - VIO: " + vio.name);

@@ -728,7 +729,7 @@ class ManagedSystem extends Resource {

     // VIO Storage - Virtual
     List<Measurement> getVioStorageVirtualAdapters(int sample) throws NullPointerException {

         log.debug("getVioStorageVirtualAdapters()");
         List<Measurement> list = new ArrayList<>();
         metric.getSample(sample).viosUtil.forEach( (vio) -> {
             vio.storage.genericVirtualAdapters.forEach( (adapter) -> {
```
#### ManagementConsole.java

```diff
@@ -15,22 +15,25 @@
  */
 package biz.nellemann.hmci;

+import java.io.IOException;
+import static java.lang.Thread.sleep;
+import java.time.Duration;
+import java.time.Instant;
+import java.time.temporal.ChronoUnit;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Objects;
+import java.util.concurrent.atomic.AtomicBoolean;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

+import com.fasterxml.jackson.dataformat.xml.XmlMapper;
+
 import biz.nellemann.hmci.dto.toml.HmcConfiguration;
 import biz.nellemann.hmci.dto.xml.Link;
 import biz.nellemann.hmci.dto.xml.ManagementConsoleEntry;
 import biz.nellemann.hmci.dto.xml.XmlFeed;
-import com.fasterxml.jackson.dataformat.xml.XmlMapper;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-import java.time.Duration;
-import java.time.Instant;
-import java.time.temporal.ChronoUnit;
-import java.util.*;
-import java.util.concurrent.atomic.AtomicBoolean;
-
-import static java.lang.Thread.sleep;
-
 class ManagementConsole implements Runnable {

@@ -171,7 +174,7 @@ class ManagementConsole implements Runnable {
                 }
             }

-        } catch (Exception e) {
+        } catch (IOException e) {
             log.warn("discover() - error: {}", e.getMessage());
         }
```
#### Resource.java

```diff
@@ -21,9 +21,9 @@ public abstract class Resource {
     private final ArrayList<String> sampleHistory = new ArrayList<>();

     protected SystemUtil metric;
-    protected final int maxNumberOfSamples = 60;
-    protected final int minNumberOfSamples = 5;
-    protected int noOfSamples = maxNumberOfSamples;
+    protected final int MAX_NUMBER_OF_SAMPLES = 60;
+    protected final int MIN_NUMBER_OF_SAMPLES = 5;
+    protected int noOfSamples = MAX_NUMBER_OF_SAMPLES;

@@ -114,7 +114,7 @@ public abstract class Resource {
                 processed++;
                 sampleHistory.add(timestamp); // Add to processed history
             } catch (NullPointerException e) {
-                log.warn("process() - error: {}", e.getMessage());
+                log.warn("process() - error", e);
             }
         }

@@ -125,8 +125,8 @@ public abstract class Resource {
         }

         // Decrease down to minSamples
-        if(noOfSamples > minNumberOfSamples) {
-            noOfSamples = Math.min( (noOfSamples - 1), Math.max( (noOfSamples - processed) + 5, minNumberOfSamples));
+        if(noOfSamples > MIN_NUMBER_OF_SAMPLES) {
+            noOfSamples = Math.min( (noOfSamples - 1), Math.max( (noOfSamples - processed) + 5, MIN_NUMBER_OF_SAMPLES));
         }

     }
```
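The renamed constants feed Resource's adaptive sample window: after each run, the number of PCM samples requested shrinks by at least one, staying five samples above what the last run actually processed and never dropping below the minimum. The update rule from the hunk above, extracted into a standalone method for illustration (class and method names are mine, the expression is the project's):

```java
public class SampleWindow {

    static final int MAX_NUMBER_OF_SAMPLES = 60;
    static final int MIN_NUMBER_OF_SAMPLES = 5;

    /**
     * Shrink the window by at least one, keep a margin of 5 samples above
     * the unprocessed remainder, and never go below the minimum - the same
     * expression Resource evaluates after processing a metrics run.
     */
    static int adjust(int noOfSamples, int processed) {
        if (noOfSamples > MIN_NUMBER_OF_SAMPLES) {
            noOfSamples = Math.min(noOfSamples - 1,
                Math.max((noOfSamples - processed) + 5, MIN_NUMBER_OF_SAMPLES));
        }
        return noOfSamples;
    }

    public static void main(String[] args) {
        System.out.println(adjust(60, 60)); // all 60 were fresh -> collapse to the minimum: prints 5
        System.out.println(adjust(10, 2));  // mostly overlap -> shrink by one: prints 9
        System.out.println(adjust(5, 0));   // already at the minimum: prints 5
    }
}
```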
#### RestClient.java

```diff
@@ -1,23 +1,33 @@
 package biz.nellemann.hmci;

-import biz.nellemann.hmci.dto.xml.LogonResponse;
-import com.fasterxml.jackson.dataformat.xml.XmlMapper;
-import okhttp3.*;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import java.io.IOException;
-import java.net.MalformedURLException;
-import java.net.URL;
-import java.security.KeyManagementException;
-import java.security.NoSuchAlgorithmException;
-import java.security.SecureRandom;
-import java.security.cert.X509Certificate;
-import java.time.Instant;
-import java.time.temporal.ChronoUnit;
-import java.util.Objects;
-import java.util.concurrent.TimeUnit;
-
 import javax.net.ssl.SSLContext;
 import javax.net.ssl.SSLSocketFactory;
 import javax.net.ssl.TrustManager;
 import javax.net.ssl.X509TrustManager;
+
+import java.io.*;
+import java.net.*;
+import java.security.KeyManagementException;
+import java.security.NoSuchAlgorithmException;
+import java.security.SecureRandom;
+import java.security.cert.X509Certificate;
+import java.util.Objects;
+import java.util.concurrent.TimeUnit;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.fasterxml.jackson.dataformat.xml.XmlMapper;
+
+import biz.nellemann.hmci.dto.xml.LogonResponse;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;

 public class RestClient {

@@ -38,6 +48,9 @@ public class RestClient {
     protected final String username;
     protected final String password;

+    private final static int MAX_MINUTES_BETWEEN_AUTHENTICATION = 60; // TODO: Make configurable and match HMC timeout settings
+    private Instant lastAuthenticationTimestamp;
+
     public RestClient(String baseUrl, String username, String password, Boolean trustAll) {
         this.baseUrl = baseUrl;
@@ -63,6 +76,8 @@ public class RestClient {
             log.error("ManagementConsole() - trace error: " + e.getMessage());
         }
         }*/
+        Thread shutdownHook = new Thread(this::logoff);
+        Runtime.getRuntime().addShutdownHook(shutdownHook);
     }

@@ -70,6 +85,9 @@ public class RestClient {
      * Logon to the HMC and get an authentication token for further requests.
      */
     public synchronized void login() {
+        if(authToken != null) {
+            logoff();
+        }

         log.info("Connecting to HMC - {} @ {}", username, baseUrl);
         StringBuilder payload = new StringBuilder();
@@ -102,10 +120,12 @@ public class RestClient {
             LogonResponse logonResponse = xmlMapper.readValue(responseBody, LogonResponse.class);

             authToken = logonResponse.getToken();
+            lastAuthenticationTimestamp = Instant.now();
             log.debug("logon() - auth token: {}", authToken);

         } catch (Exception e) {
             log.warn("logon() - error: {}", e.getMessage());
+            lastAuthenticationTimestamp = null;
         }

     }
@@ -131,13 +151,12 @@ public class RestClient {
                 .delete()
                 .build();

             String responseBody;
             try (Response response = httpClient.newCall(request).execute()) {
                 responseBody = Objects.requireNonNull(response.body()).string();
             } catch (IOException e) {
                 log.warn("logoff() error: {}", e.getMessage());
+            } finally {
+                authToken = null;
+                lastAuthenticationTimestamp = null;
             }

         } catch (MalformedURLException e) {
@@ -162,10 +181,14 @@ public class RestClient {
      * Return a Response from the HMC
      * @param url to get Response from
      * @return Response body string
      * @throws IOException
      */
     public synchronized String getRequest(URL url) throws IOException {

         log.debug("getRequest() - URL: {}", url.toString());
+        if (lastAuthenticationTimestamp == null || lastAuthenticationTimestamp.plus(MAX_MINUTES_BETWEEN_AUTHENTICATION, ChronoUnit.MINUTES).isBefore(Instant.now())) {
+            login();
+        }

         Request request = new Request.Builder()
             .url(url)
@@ -220,10 +243,18 @@ public class RestClient {

     /**
      * Send a POST request with a payload (can be null) to the HMC
      * @param url
      * @param payload
      * @return Response body string
      * @throws IOException
      */
     public synchronized String postRequest(URL url, String payload) throws IOException {

         log.debug("sendPostRequest() - URL: {}", url.toString());
+        if (lastAuthenticationTimestamp == null || lastAuthenticationTimestamp.plus(MAX_MINUTES_BETWEEN_AUTHENTICATION, ChronoUnit.MINUTES).isBefore(Instant.now())) {
+            login();
+        }

         RequestBody requestBody;
         if(payload != null) {
             requestBody = RequestBody.create(payload, MEDIA_TYPE_IBM_XML_POST);
```
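The new guard in `getRequest()`/`postRequest()` re-authenticates only when the last logon is older than a fixed number of minutes, which is what keeps sessions from lingering on the HMC (the 1.4.5 timeout change). The time check in isolation, with the class and method names being illustrative:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class AuthAge {

    static final int MAX_MINUTES_BETWEEN_AUTHENTICATION = 60;

    /**
     * True when we have never logged on, or the previous logon is older than
     * MAX_MINUTES_BETWEEN_AUTHENTICATION - the condition the REST client
     * evaluates before every GET/POST request.
     */
    static boolean needsLogin(Instant lastAuthenticationTimestamp, Instant now) {
        return lastAuthenticationTimestamp == null
            || lastAuthenticationTimestamp.plus(MAX_MINUTES_BETWEEN_AUTHENTICATION, ChronoUnit.MINUTES).isBefore(now);
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2023-11-13T12:00:00Z");
        System.out.println(needsLogin(null, now));                              // prints true
        System.out.println(needsLogin(now.minus(30, ChronoUnit.MINUTES), now)); // prints false
        System.out.println(needsLogin(now.minus(61, ChronoUnit.MINUTES), now)); // prints true
    }
}
```

Setting `lastAuthenticationTimestamp = null` on logon failure and in logoff's `finally` block forces the next request through this check, so a dropped token is always re-acquired.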
#### SystemEnergy.java

```diff
@@ -1,14 +1,20 @@
 package biz.nellemann.hmci;

-import biz.nellemann.hmci.dto.xml.Link;
-import biz.nellemann.hmci.dto.xml.XmlFeed;
-import com.fasterxml.jackson.dataformat.xml.XmlMapper;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import java.io.IOException;
 import java.net.URI;
-import java.util.*;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.fasterxml.jackson.dataformat.xml.XmlMapper;
+
+import biz.nellemann.hmci.dto.xml.Link;
+import biz.nellemann.hmci.dto.xml.XmlFeed;

 class SystemEnergy extends Resource {
```
#### VersionProvider.java

```diff
@@ -15,12 +15,12 @@
  */
 package biz.nellemann.hmci;

-import picocli.CommandLine;
-
 import java.io.IOException;
 import java.util.jar.Attributes;
 import java.util.jar.Manifest;

+import picocli.CommandLine;
+
 class VersionProvider implements CommandLine.IVersionProvider {

     @Override
```
#### VirtualIOServer.java

```diff
@@ -1,13 +1,16 @@
 package biz.nellemann.hmci;

-import biz.nellemann.hmci.dto.xml.VirtualIOServerEntry;
-import biz.nellemann.hmci.dto.xml.XmlEntry;
-import com.fasterxml.jackson.dataformat.xml.XmlMapper;
+import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

-import java.net.URI;
-import java.net.URISyntaxException;
+import com.fasterxml.jackson.dataformat.xml.XmlMapper;
+
+import biz.nellemann.hmci.dto.xml.VirtualIOServerEntry;
+import biz.nellemann.hmci.dto.xml.XmlEntry;

 public class VirtualIOServer {
     private final static Logger log = LoggerFactory.getLogger(VirtualIOServer.class);
@@ -58,7 +61,7 @@ public class VirtualIOServer {
                 throw new UnsupportedOperationException("Failed to deserialize VirtualIOServer");
             }

-        } catch (Exception e) {
+        } catch (IOException e) {
             log.error("discover() - error: {}", e.getMessage());
         }
     }
```
@ -1,5 +1,8 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class EnergyUtil {
|
||||
|
||||
public PowerUtil powerUtil = new PowerUtil();
|
||||
|
|
|
@ -1,10 +1,13 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
/**
|
||||
* Storage adapter
|
||||
*/
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class FiberChannelAdapter {
|
||||
|
||||
public String id;
|
||||
|
|
|
@ -1,6 +1,9 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class GenericAdapter {
|
||||
|
||||
public String id;
|
||||
|
|
|
@ -1,10 +1,13 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class GenericPhysicalAdapters {
|
||||
|
||||
public String id;
|
||||
public String type;
|
||||
public String type = "";
|
||||
public String physicalLocation;
|
||||
public double numOfReads;
|
||||
public double numOfWrites;
|
||||
|
|
|
@ -1,12 +1,13 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnore;
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
/**
|
||||
* Storage adapter
|
||||
*/
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class GenericVirtualAdapter {
|
||||
|
||||
public String id = "";
|
||||
|
|
|
@ -1,6 +1,9 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class LparProcessor {
|
||||
|
||||
public Integer poolId = 0;
|
||||
|
|
|
@ -1,9 +1,12 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class Network {
|
||||
|
||||
public List<String> clientLpars = new ArrayList<>();
|
||||
|
|
|
@ -1,6 +1,9 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class PhysicalProcessorPool {
|
||||
|
||||
public double assignedProcUnits = 0.0;
|
||||
|
|
|
@ -1,7 +1,10 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class PowerUtil {
|
||||
|
||||
public Number powerReading = 0.0;
|
||||
public float powerReading = 0.0F;
|
||||
|
||||
}
|
||||
|
|
|
@ -1,5 +1,8 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public class ProcessedMetrics {
|
||||
|
||||
public SystemUtil systemUtil;
|
||||
|
|
|
@ -1,7 +1,10 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class SRIOVAdapter {
|
||||
|
||||
public String drcIndex = "";
|
||||
|
|
|
@ -1,5 +1,8 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public class SRIOVLogicalPort {
|
||||
|
||||
public String drcIndex;
|
||||
|
|
|
@ -1,5 +1,8 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class SRIOVPhysicalPort {
|
||||
|
||||
public String id;
|
||||
|
|
|
@ -1,10 +1,12 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
import com.fasterxml.jackson.annotation.JsonProperty;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class SampleInfo {
|
||||
|
||||
@JsonProperty("timeStamp")
|
||||
|
|
|
@ -1,5 +1,8 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class ServerMemory {
|
||||
|
||||
public double totalMem = 0.0;
|
||||
|
|
|
@ -1,5 +1,8 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class ServerProcessor {
|
||||
|
||||
public Double totalProcUnits = 0.0;
|
||||
|
|
|
@ -1,9 +1,12 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class ServerUtil {
|
||||
|
||||
public final ServerProcessor processor = new ServerProcessor();
|
||||
|
|
|
@ -1,12 +1,15 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
/**
|
||||
* Network adapter
|
||||
*/
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class SharedAdapter {
|
||||
|
||||
public String id;
|
||||
|
|
|
@ -1,6 +1,9 @@
|
|||
package biz.nellemann.hmci.dto.json;
|
||||
|
||||
|
||||
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
|
||||
|
||||
@JsonIgnoreProperties(ignoreUnknown = true)
|
||||
public final class SharedProcessorPool {
|
||||
|
||||
public int id;
|
||||
|
|
|
Storage.java
```diff
@@ -1,9 +1,12 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

import java.util.ArrayList;
import java.util.List;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class Storage {

    public List<String> clientLpars = new ArrayList<>();
```
SystemFirmware.java
```diff
@@ -1,9 +1,10 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonUnwrapped;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class SystemFirmware {

    @JsonUnwrapped
```
SystemUtil.java
```diff
@@ -1,9 +1,11 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonUnwrapped;
import java.util.List;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class SystemUtil {

    @JsonProperty("utilInfo")
```
Temperature.java
```diff
@@ -1,5 +1,8 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class Temperature {

    public String entityId = "";
```
ThermalUtil.java
```diff
@@ -1,8 +1,11 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

import java.util.ArrayList;
import java.util.List;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class ThermalUtil {

    public List<Temperature> inletTemperatures = new ArrayList<>();
```
UtilInfo.java
```diff
@@ -2,7 +2,7 @@ package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

-@JsonIgnoreProperties({ "metricArrayOrder" })
+@JsonIgnoreProperties(ignoreUnknown = true)
public final class UtilInfo {

    public String version = "";
```
UtilSample.java
```diff
@@ -1,12 +1,13 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonAlias;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

import java.util.ArrayList;
import java.util.List;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class UtilSample {

    public String sampleType = "";
```
ViosMemory.java
```diff
@@ -1,5 +1,8 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class ViosMemory {

    public double assignedMem;
    public double utilizedMem;
```
ViosUtil.java
```diff
@@ -1,5 +1,8 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class ViosUtil {

    public int id;
```
VirtualEthernetAdapter.java
```diff
@@ -1,10 +1,13 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

/**
 * Network adapter SEA
 */
@JsonIgnoreProperties(ignoreUnknown = true)
public final class VirtualEthernetAdapter {

    public String physicalLocation = "";
```
VirtualFiberChannelAdapter.java
```diff
@@ -1,12 +1,16 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

/**
 * Storage adapter - NPIV ?
 */
@JsonIgnoreProperties(ignoreUnknown = true)
public final class VirtualFiberChannelAdapter {

    public String id = "";
    public String wwpn = "";
    public String wwpn2 = "";
    public String physicalLocation = "";
```
InfluxConfiguration.java
```diff
@@ -3,6 +3,10 @@ package biz.nellemann.hmci.dto.toml;
public class InfluxConfiguration {

    public String url;
    public String org;
    public String token;
    public String bucket;

    public String username;
    public String password;
    public String database;
```
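The `InfluxConfiguration` class carries both the new InfluxDB v2 fields (`url`, `org`, `token`, `bucket`) and the older v1 fields (`username`, `password`, `database`). A minimal sketch of how these fields could look in the TOML configuration; the `[influx]` table name and all values here are assumptions for illustration, not taken from this diff — see [doc/hmci.toml](doc/hmci.toml) for the authoritative example:

```toml
# Hypothetical [influx] section mirroring the InfluxConfiguration fields.
# InfluxDB v2 style: authenticate with org + token and write to a bucket.
[influx]
url    = "http://localhost:8086"
org    = "hmci"
token  = "<api-token>"
bucket = "hmci"

# InfluxDB v1 style (1.8 or later) would instead use:
#username = "hmci"
#password = "secret"
#database = "hmci"
```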
```diff
@@ -5,7 +5,6 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

import java.io.Serializable;
import java.util.List;

//@JsonIgnoreProperties({ "author", "etag" })
@JsonIgnoreProperties(ignoreUnknown = true)
```
```diff
@@ -6,9 +6,7 @@ import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlElementWrapper;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

//@JsonIgnoreProperties({ "link" })
@JsonIgnoreProperties(ignoreUnknown = true)
```
LogicalPartitionTest.groovy
```diff
@@ -52,6 +52,7 @@ class LogicalPartitionTest extends Specification {
    }

    def cleanupSpec() {
        serviceClient.logoff()
        mockServer.stop()
    }
```
ManagedSystemTest.groovy
```diff
@@ -42,6 +42,7 @@ class ManagedSystemTest extends Specification {
    }

    def cleanupSpec() {
        serviceClient.logoff()
        mockServer.stop()
    }
```