Merged in jackson (pull request #22)

Jackson
This commit is contained in:
Mark Nellemann 2022-12-01 15:17:47 +00:00
commit 7786f5182f
121 changed files with 17355 additions and 3306 deletions

View file

@@ -2,33 +2,29 @@
 All notable changes to this project will be documented in this file.
-## [1.3.4] - 2022-10-24
-### Changed
-- Updated 3rd party dependencies
+## [1.4.0] - 2022-12-01
+- Rewrite of toml+xml+json de-serialization code (uses jackson now).
+- Changes to configuration file format - please look at [doc/hmci.toml](doc/hmci.toml) as example.
+- Logging (write to file) JSON output from HMC is currently not possible.
 ## [1.3.3] - 2022-09-20
 ### Added
 - Default configuration location on Windows platform.
 - Process LPAR SR-IOV logical network ports data
 - Update default dashboards
 - Update documentation
 ## [1.3.0] - 2022-02-04
 ### Changed
 - Correct use of InfluxDB batch writing.
 ## [1.2.8] - 2022-02-28
 ### Changed
 - Sort measurement tags before writing to InfluxDB.
 - Update 3rd party dependencies.
 ## [1.2.7] - 2022-02-24
 ### Added
 - Options to include/exclude Managed Systems and/or Logical Partitions.
-[1.3.4]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.3.4%0Dv1.3.3
+[1.4.0]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.4.0%0Dv1.3.3
 [1.3.3]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.3.3%0Dv1.3.0
 [1.3.0]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.3.0%0Dv1.2.8
 [1.2.8]: https://bitbucket.org/mnellemann/hmci/branches/compare/v1.2.8%0Dv1.2.7

View file

@@ -2,14 +2,14 @@
 **HMCi** is a utility that collects metrics from one or more *IBM Power Hardware Management Consoles (HMC)*, without the need to install agents on logical partitions / virtual machines running on the IBM Power systems. The metric data is processed and saved into an InfluxDB time-series database. Grafana is used to visualize the metrics data from InfluxDB through provided dashboards, or your own customized dashboards.
-This software is free to use and is licensed under the [Apache 2.0 License](https://bitbucket.org/mnellemann/syslogd/src/master/LICENSE), but is not supported or endorsed by International Business Machines (IBM). There is an optional [companion agent](https://bitbucket.org/mnellemann/sysmon/), which provides more metrics from within AIX and Linux.
+This software is free to use and is licensed under the [Apache 2.0 License](https://bitbucket.org/mnellemann/hmci/src/master/LICENSE), but is not supported or endorsed by International Business Machines (IBM). There is an optional [companion agent](https://bitbucket.org/mnellemann/sysmon/), which provides more metrics from within AIX and Linux.
 Metrics includes:
 - *Managed Systems* - the physical Power servers
 - *Logical Partitions* - the virtualized servers running AIX, Linux or IBM-i (AS/400)
 - *Virtual I/O Servers* - the i/o partition(s) virtualizing network and storage
-- *Energy* - power consumption and temperatures (needs to be enabled and is not available on P7 and multi-chassis systems)
+- *Energy* - watts and temperatures (needs to be enabled and is not available on P7 and multi-chassis systems)
 ![architecture](doc/HMCi.png)
@@ -33,8 +33,11 @@ There are few steps in the installation.
 - Navigate to *Users and Security*
 - Create a new read-only/viewer **hmci** user, which will be used to connect to the HMC.
 - Click *Manage User Profiles and Access*, edit the newly created *hmci* user and click *User Properties*:
-- **Enable** *Allow remote access via the web*
+- Set *Session timeout minutes* to **60**
+- Set *Verify timeout minutes* to **15**
+- Set *Idle timeout minutes* to **90**
 - Set *Minimum time in days between password changes* to **0**
+- **Enable** *Allow remote access via the web*
 - Navigate to *HMC Management* and *Console Settings*
 - Click *Change Performance Monitoring Settings*:
 - Enable *Performance Monitoring Data Collection for Managed Servers*: **All On**
@@ -63,17 +66,17 @@ Install *HMCi* on a host, that can connect to your Power HMC (on port 12443), an
 - Ensure you have **correct date/time** and NTPd running to keep it accurate!
 - The only requirement for **hmci** is the Java runtime, version 8 (or later)
 - Install **HMCi** from [downloads](https://bitbucket.org/mnellemann/hmci/downloads/) (rpm, deb or jar) or build from source
-- On RPM based systems: **sudo rpm -i hmci-x.y.z-n.noarch.rpm**
+- On RPM based systems: ```sudo rpm -ivh hmci-x.y.z-n.noarch.rpm```
-- On DEB based systems: **sudo dpkg -i hmci_x.y.z-n_all.deb**
+- On DEB based systems: ```sudo dpkg -i hmci_x.y.z-n_all.deb```
 - Copy the **/opt/hmci/doc/hmci.toml** configuration example into **/etc/hmci.toml** and edit the configuration to suit your environment. The location of the configuration file can optionally be changed with the *--conf* option.
 - Run the **/opt/hmci/bin/hmci** program in a shell, as a @reboot cron task or configure as a proper service - there are instructions in the [doc/readme-service.md](doc/readme-service.md) file.
-- When started, *hmci* expects the InfluxDB database to be created by you.
+- When started, *hmci* expects the InfluxDB database to exist already.
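For the "proper service" option above, a systemd unit along these lines is one possible setup. This is only an illustrative sketch: the binary path comes from the install steps above, but the unit file name and remaining settings are assumptions - the packaged [doc/readme-service.md](doc/readme-service.md) instructions take precedence.

```ini
# Sketch of /etc/systemd/system/hmci.service (hypothetical file name)
[Unit]
Description=HMCi - HMC metrics collector
After=network-online.target

[Service]
# Binary location from the rpm/deb install steps above
ExecStart=/opt/hmci/bin/hmci
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```

If you go this route, reload systemd and enable the unit with `systemctl daemon-reload` followed by `systemctl enable --now hmci`.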
 ### 4 - Grafana Configuration
 - Configure Grafana to use InfluxDB as a new datasource
 - **NOTE:** set *Min time interval* to *30s* or *1m* depending on your HMCi *refresh* setting.
-- Import example dashboards from [doc/dashboards/*.json](doc/dashboards/) into Grafana as a starting point and get creative making your own cool dashboards :)
+- Import example dashboards from [doc/dashboards/*.json](doc/dashboards/) into Grafana as a starting point and get creative making your own cool dashboards - please share anything useful :)
 ## Notes
@@ -188,30 +191,30 @@ Use the gradle build tool, which will download all required dependencies:
 ### Local Testing
-#### InfluxDB container
+#### InfluxDB
 Start the InfluxDB container:
 ```shell
-docker run --name=influxdb --rm -d -p 8086:8086 influxdb:1.8-alpine
+docker run --name=influxdb --rm -d -p 8086:8086 influxdb:1.8
 ```
-To execute the Influx client from within the container:
+Create the *hmci* database:
 ```shell
-docker exec -it influxdb influx
+docker exec -i influxdb influx -execute "CREATE DATABASE hmci"
 ```
-#### Grafana container
+#### Grafana
 Start the Grafana container, linking it to the InfluxDB container:
 ```shell
-docker run --name grafana --link influxdb:influxdb --rm -d -p 3000:3000 grafana/grafana:7.1.3
+docker run --name grafana --link influxdb:influxdb --rm -d -p 3000:3000 grafana/grafana
 ```
 Setup Grafana to connect to the InfluxDB container by defining a new datasource on URL *http://influxdb:8086* named *hmci*.
-The hmci database must be created beforehand, which can be done by running the hmci tool first.
 Grafana dashboards can be imported from the *doc/* folder.
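For local testing, the manual datasource step can also be scripted with Grafana's file-based provisioning. The fragment below is a sketch for this docker setup - the file name is hypothetical, and the values assume the container link name and database from the commands above:

```yaml
# Sketch: provisioning/datasources/hmci.yaml (hypothetical file name)
apiVersion: 1
datasources:
  - name: hmci
    type: influxdb
    access: proxy
    url: http://influxdb:8086   # container link name from the docker example
    database: hmci              # created by the CREATE DATABASE step above
    jsonData:
      timeInterval: "30s"       # the "Min time interval" matching the HMCi refresh setting
```

Mount or copy this into the Grafana container's `/etc/grafana/provisioning/datasources/` directory before starting it.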

View file

@@ -1,4 +1,4 @@
-image: openjdk:8
+image: eclipse-temurin:8-jdk
 pipelines:
 branches:

View file

@@ -20,25 +20,26 @@ group = projectGroup
 version = projectVersion
 dependencies {
-annotationProcessor 'info.picocli:picocli-codegen:4.6.3'
+annotationProcessor 'info.picocli:picocli-codegen:4.7.0'
-implementation 'info.picocli:picocli:4.6.3'
+implementation 'info.picocli:picocli:4.7.0'
-implementation 'org.jsoup:jsoup:1.15.3'
-implementation 'com.squareup.okhttp3:okhttp:4.10.0'
-implementation 'com.squareup.moshi:moshi:1.14.0'
-implementation 'com.serjltt.moshi:moshi-lazy-adapters:2.2'
-implementation 'org.tomlj:tomlj:1.1.0'
 implementation 'org.influxdb:influxdb-java:2.23'
-implementation 'org.slf4j:slf4j-api:2.0.3'
+//implementation 'com.influxdb:influxdb-client-java:6.7.0'
-implementation 'org.slf4j:slf4j-simple:2.0.3'
+implementation 'org.slf4j:slf4j-api:2.0.4'
+implementation 'org.slf4j:slf4j-simple:2.0.4'
+implementation 'com.squareup.okhttp3:okhttp:4.10.0' // Also used by InfluxDB Client
+//implementation "org.eclipse.jetty:jetty-client:9.4.49.v20220914"
+implementation 'com.fasterxml.jackson.core:jackson-databind:2.14.1'
+implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.14.1'
+implementation 'com.fasterxml.jackson.dataformat:jackson-dataformat-toml:2.14.1'
+testImplementation 'junit:junit:4.13.2'
 testImplementation 'org.spockframework:spock-core:2.3-groovy-3.0'
-testImplementation 'com.squareup.okhttp3:mockwebserver:4.10.0'
+testImplementation "org.mock-server:mockserver-netty-no-dependencies:5.14.0"
-testImplementation 'org.slf4j:slf4j-simple:2.0.3'
 }
 application {
 mainClass.set('biz.nellemann.hmci.Application')
-applicationDefaultJvmArgs = [ "-server", "-Xms64m", "-Xmx64m", "-XX:+UseG1GC" ]
+applicationDefaultJvmArgs = [ "-server", "-Xms64m", "-Xmx64m", "-XX:+UseG1GC", "-XX:+ExitOnOutOfMemoryError", "-XX:+AlwaysPreTouch" ]
 }
 java {
@@ -50,6 +51,7 @@ test {
 useJUnitPlatform()
 }
 apply plugin: 'nebula.ospackage'
 ospackage {
 packageName = 'hmci'
@@ -104,7 +106,7 @@ jacocoTestCoverageVerification {
 violationRules {
 rule {
 limit {
-minimum = 0.5
+minimum = 0.4
 }
 }
 }

View file

@@ -0,0 +1,719 @@
{
"__inputs": [
{
"name": "DS_INFLUXDB",
"label": "InfluxDB",
"description": "",
"type": "datasource",
"pluginId": "influxdb",
"pluginName": "InfluxDB"
}
],
"__elements": [],
"__requires": [
{
"type": "panel",
"id": "bargauge",
"name": "Bar gauge",
"version": ""
},
{
"type": "panel",
"id": "gauge",
"name": "Gauge",
"version": ""
},
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "8.3.5"
},
{
"type": "panel",
"id": "heatmap",
"name": "Heatmap",
"version": ""
},
{
"type": "datasource",
"id": "influxdb",
"name": "InfluxDB",
"version": "1.0.0"
},
{
"type": "panel",
"id": "stat",
"name": "Stat",
"version": ""
},
{
"type": "panel",
"id": "text",
"name": "Text",
"version": ""
}
],
"annotations": {
"enable": false,
"list": [
{
"builtIn": 1,
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "https://bitbucket.org/mnellemann/hmci/",
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": 1465,
"graphTooltip": 0,
"id": null,
"iteration": 1669798059148,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"gridPos": {
"h": 3,
"w": 24,
"x": 0,
"y": 0
},
"id": 33,
"options": {
"content": "## Metrics collected from IBM Power HMC\n \nFor more information: [bitbucket.org/mnellemann/hmci](https://bitbucket.org/mnellemann/hmci)\n ",
"mode": "markdown"
},
"pluginVersion": "8.3.5",
"targets": [
{
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"refId": "A"
}
],
"transparent": true,
"type": "text"
},
{
"cards": {},
"color": {
"cardColor": "#b4ff00",
"colorScale": "sqrt",
"colorScheme": "interpolateOranges",
"exponent": 0.5,
"mode": "opacity"
},
"dataFormat": "timeseries",
"description": "",
"gridPos": {
"h": 11,
"w": 24,
"x": 0,
"y": 3
},
"heatmap": {},
"hideZeroBuckets": true,
"highlightCards": true,
"id": 30,
"legend": {
"show": false
},
"pluginVersion": "8.3.5",
"reverseYBuckets": false,
"targets": [
{
"alias": "$tag_servername",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"poolname"
],
"type": "tag"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "server_sharedProcessorPool",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"configurableProcUnits\") AS \"Utilization\" FROM \"server_processor\" WHERE $timeFilter GROUP BY time($__interval), \"servername\" fill(none)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"utilizedProcUnits"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "servername",
"operator": "=~",
"value": "/^$ServerName$/"
}
]
}
],
"title": "Processors - Utilized / Configurable",
"tooltip": {
"show": true,
"showHistogram": false
},
"transparent": true,
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "percentunit",
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
"yBucketBound": "auto"
},
{
"description": "",
"fieldConfig": {
"defaults": {
"decimals": 2,
"mappings": [],
"max": 1,
"min": 0,
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "orange",
"value": 70
},
{
"color": "red",
"value": 85
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 14
},
"id": 36,
"options": {
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true
},
"pluginVersion": "8.3.5",
"targets": [
{
"alias": "$tag_servername",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"poolname"
],
"type": "tag"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "server_sharedProcessorPool",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"configurableProcUnits\") AS \"Utilization\" FROM \"server_processor\" WHERE $timeFilter GROUP BY time($__interval), \"servername\" fill(none)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"utilizedProcUnits"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "servername",
"operator": "=~",
"value": "/^$ServerName$/"
}
]
}
],
"title": "Processors - Utilized / Configurable",
"type": "gauge"
},
{
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "continuous-BlYlRd"
},
"decimals": 1,
"mappings": [],
"max": 1,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 12,
"y": 14
},
"id": 37,
"options": {
"displayMode": "lcd",
"orientation": "horizontal",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showUnfilled": true
},
"pluginVersion": "8.3.5",
"targets": [
{
"alias": "$tag_servername",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"poolname"
],
"type": "tag"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "server_sharedProcessorPool",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"utilizedProcUnits\") / mean(\"configurableProcUnits\") AS \"Utilization\" FROM \"server_processor\" WHERE $timeFilter GROUP BY time($__interval), \"servername\" fill(none)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"utilizedProcUnits"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "servername",
"operator": "=~",
"value": "/^$ServerName$/"
}
]
}
],
"title": "Processors - Utilized / Configurable",
"type": "bargauge"
},
{
"description": "Configurable processors are activated and available for use and assignment. The difference up to the total is \"dark cores\" which can be activated by code or used with PEP-2.0.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "continuous-BlPu"
},
"mappings": [],
"max": 1,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 25
},
"id": 35,
"options": {
"displayMode": "lcd",
"orientation": "horizontal",
"reduceOptions": {
"calcs": [],
"fields": "",
"values": false
},
"showUnfilled": true
},
"pluginVersion": "8.3.5",
"targets": [
{
"alias": "$tag_servername",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"poolname"
],
"type": "tag"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "server_sharedProcessorPool",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"configurableProcUnits\") / mean(\"totalProcUnits\") AS \"Utilization\" FROM \"server_processor\" WHERE $timeFilter GROUP BY time($__interval), \"servername\" fill(none)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"utilizedProcUnits"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "servername",
"operator": "=~",
"value": "/^$ServerName$/"
}
]
}
],
"title": "Processors - Configurable / Total",
"type": "bargauge"
},
{
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "#EAB839",
"value": 85
},
{
"color": "red",
"value": 95
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 12,
"y": 25
},
"id": 2,
"links": [],
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"text": {
"titleSize": 16
},
"textMode": "value_and_name"
},
"pluginVersion": "8.3.5",
"targets": [
{
"alias": "$tag_servername",
"datasource": {
"type": "influxdb",
"uid": "${DS_INFLUXDB}"
},
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"servername"
],
"type": "tag"
},
{
"params": [
"none"
],
"type": "fill"
}
],
"hide": false,
"measurement": "server_memory",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"assignedMemToLpars\") / mean(\"totalMem\") AS \"Utilization\" FROM \"server_memory\" WHERE $timeFilter GROUP BY time($__interval), \"servername\" fill(none)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"assignedMemToLpars"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"assigned"
],
"type": "alias"
}
],
[
{
"params": [
"availableMem"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"available"
],
"type": "alias"
}
]
],
"tags": []
}
],
"title": "Memory Utilization - Assigned / Total",
"type": "stat"
}
],
"refresh": "30s",
"schemaVersion": 34,
"style": "dark",
"tags": [
"Power"
],
"templating": {
"list": []
},
"time": {
"from": "now-7d",
"now": false,
"to": "now-30s"
},
"timepicker": {
"nowDelay": "30s",
"refresh_intervals": [
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "browser",
"title": "HMCi - Power System Utilization",
"uid": "MZ7Q-4K4k",
"version": 3,
"weekStart": ""
}

View file

@@ -1,16 +1,6 @@
 # HMCi Configuration
 # Copy this file into /etc/hmci.toml and customize it to your environment.
-###
-### General HMCi Settings
-###
-# How often to query HMC's for data - in seconds
-hmci.update = 30
-# Rescan HMC's for new systems and partitions - every x update
-hmci.rescan = 120
 ###
 ### Define one InfluxDB to save metrics into
@@ -23,6 +13,7 @@ password = ""
 database = "hmci"
 ###
 ### Define one or more HMC's to query for metrics
 ### Each entry must be named [hmc.<something-unique>]
@@ -31,18 +22,20 @@ database = "hmci"
 # HMC to query for data and metrics
 [hmc.site1]
-url = "https://10.10.10.10:12443"
+url = "https://10.10.10.5:12443"
 username = "hmci"
 password = "hmcihmci"
-unsafe = true # Ignore SSL cert. errors
+refresh = 30 # How often to query HMC for data - in seconds
+discover = 120 # Rescan HMC for new systems and partitions - in minutes
+trust = true # Ignore SSL cert. errors (due to default self-signed cert. on HMC)
+energy = true # Collect energy metrics on supported systems
 # Another HMC example
 #[hmc.site2]
-#url = "https://10.10.10.30:12443"
+#url = "https://10.10.20.5:12443"
 #username = "user"
 #password = "password"
-#unsafe = false # When false, validate SSL/TLS cerfificate, default is true
-#energy = false # When false, do not collect energy metrics, default is true
 #trace = "/tmp/hmci-trace" # When present, store JSON metrics files from HMC into this folder
 #excludeSystems = [ 'notThisSystem' ] # Collect metrics from all systems except those listed here
 #includeSystems = [ 'onlyThisSystems' ] # Collcet metrics from no systems but those listed here

View file

@@ -1,3 +1,3 @@
 projectId = hmci
 projectGroup = biz.nellemann.hmci
-projectVersion = 1.3.4
+projectVersion = 1.4.0

View file

@@ -1,5 +1,5 @@
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-7.4-bin.zip
+distributionUrl=https\://services.gradle.org/distributions/gradle-7.5.1-bin.zip
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists

View file

@@ -15,12 +15,13 @@
 */
 package biz.nellemann.hmci;
+import biz.nellemann.hmci.dto.toml.Configuration;
+import com.fasterxml.jackson.dataformat.toml.TomlMapper;
 import picocli.CommandLine;
 import picocli.CommandLine.Option;
 import picocli.CommandLine.Command;
 import java.io.File;
-import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.Callable;
@@ -45,9 +46,8 @@ public class Application implements Callable<Integer> {
 @Override
-public Integer call() throws IOException {
+public Integer call() {
-Configuration configuration;
 InfluxClient influxClient;
 List<Thread> threadList = new ArrayList<>();
@@ -66,22 +66,31 @@ public class Application implements Callable<Integer> {
 }
 try {
-configuration = new Configuration(configurationFile.toPath());
+TomlMapper mapper = new TomlMapper();
-influxClient = new InfluxClient(configuration.getInflux());
+Configuration configuration = mapper.readerFor(Configuration.class)
+.readValue(configurationFile);
+influxClient = new InfluxClient(configuration.influx);
 influxClient.login();
-for(Configuration.HmcObject configHmc : configuration.getHmc()) {
+configuration.hmc.forEach((key, value) -> {
-Thread t = new Thread(new HmcInstance(configHmc, influxClient));
+try {
-t.setName(configHmc.name);
+ManagementConsole managementConsole = new ManagementConsole(value, influxClient);
+Thread t = new Thread(managementConsole);
+t.setName(key);
 t.start();
 threadList.add(t);
+} catch (Exception e) {
+System.err.println(e.getMessage());
 }
+});
 for (Thread thread : threadList) {
 thread.join();
 }
-} catch (InterruptedException | RuntimeException e) {
+influxClient.logoff();
+} catch (Exception e) {
 System.err.println(e.getMessage());
 return 1;
 }

View file

@@ -1,268 +0,0 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.hmci;
import org.tomlj.Toml;
import org.tomlj.TomlParseResult;
import org.tomlj.TomlTable;
import java.io.IOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
public final class Configuration {
final private Long update;
final private Long rescan;
final private InfluxObject influx;
final private List<HmcObject> hmcList;
Configuration(Path configurationFile) throws IOException {
TomlParseResult result = Toml.parse(configurationFile);
result.errors().forEach(error -> System.err.println(error.toString()));
if(result.contains("hmci.update")) {
update = result.getLong("hmci.update");
} else {
update = 30L;
}
if(result.contains("hmci.rescan")) {
rescan = result.getLong("hmci.rescan");
} else {
rescan = 60L;
}
hmcList = parseConfigurationForHmc(result);
influx = parseConfigurationForInflux(result);
}
private List<HmcObject> parseConfigurationForHmc(TomlParseResult result) {
ArrayList<HmcObject> list = new ArrayList<>();
if(result.contains("hmc") && result.isTable("hmc")) {
TomlTable hmcTable = result.getTable("hmc");
if(hmcTable == null) {
return list;
}
for(String key : hmcTable.keySet()) {
HmcObject c = new HmcObject();
c.name = key;
c.update = update;
c.rescan = rescan;
if(hmcTable.contains(key+".url")) {
c.url = hmcTable.getString(key+".url");
}
if(hmcTable.contains(key+".username")) {
c.username = hmcTable.getString(key+".username");
}
if(hmcTable.contains(key+".password")) {
c.password = hmcTable.getString(key+".password");
}
if(hmcTable.contains(key+".unsafe")) {
c.unsafe = hmcTable.getBoolean(key+".unsafe");
} else {
c.unsafe = false;
}
if(hmcTable.contains(key+".energy")) {
c.energy = hmcTable.getBoolean(key+".energy");
} else {
c.energy = true;
}
if(hmcTable.contains(key+".trace")) {
c.trace = hmcTable.getString(key+".trace");
} else {
c.trace = null;
}
if(hmcTable.contains(key+".excludeSystems")) {
List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".excludeSystems").toList();
c.excludeSystems = tmpList.stream()
.map(object -> Objects.toString(object, null))
.collect(Collectors.toList());
} else {
c.excludeSystems = new ArrayList<>();
}
if(hmcTable.contains(key+".includeSystems")) {
List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".includeSystems").toList();
c.includeSystems = tmpList.stream()
.map(object -> Objects.toString(object, null))
.collect(Collectors.toList());
} else {
c.includeSystems = new ArrayList<>();
}
if(hmcTable.contains(key+".excludePartitions")) {
List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".excludePartitions").toList();
c.excludePartitions = tmpList.stream()
.map(object -> Objects.toString(object, null))
.collect(Collectors.toList());
} else {
c.excludePartitions = new ArrayList<>();
}
if(hmcTable.contains(key+".includePartitions")) {
List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".includePartitions").toList();
c.includePartitions = tmpList.stream()
.map(object -> Objects.toString(object, null))
.collect(Collectors.toList());
} else {
c.includePartitions = new ArrayList<>();
}
list.add(c);
}
}
return list;
}
private InfluxObject parseConfigurationForInflux(TomlParseResult result) {
InfluxObject c = new InfluxObject();
if(result.contains("influx")) {
TomlTable t = result.getTable("influx");
if(t != null && t.contains("url")) {
c.url = t.getString("url");
}
if(t != null && t.contains("username")) {
c.username = t.getString("username");
}
if(t != null && t.contains("password")) {
c.password = t.getString("password");
}
if(t != null && t.contains("database")) {
c.database = t.getString("database");
}
}
return c;
}
public List<HmcObject> getHmc() {
return hmcList;
}
public InfluxObject getInflux() {
return influx;
}
static class InfluxObject {
String url = "http://localhost:8086";
String username = "root";
String password = "";
String database = "hmci";
private boolean validated = false;
InfluxObject() { }
InfluxObject(String url, String username, String password, String database) {
this.url = url;
this.username = username;
this.password = password;
this.database = database;
}
Boolean isValid() {
return validated;
}
// TODO: Implement validation
void validate() {
validated = true;
}
@Override
public String toString() {
return url;
}
}
static class HmcObject {
String name;
String url;
String username;
String password;
Boolean unsafe = false;
Boolean energy = true;
String trace;
List<String> excludeSystems;
List<String> includeSystems;
List<String> excludePartitions;
List<String> includePartitions;
Long update = 30L;
Long rescan = 60L;
private boolean validated = false;
HmcObject() { }
HmcObject(String name, String url, String username, String password, Boolean unsafe, Long update, Long rescan) {
this.name = name;
this.url = url;
this.username = username;
this.password = password;
this.unsafe = unsafe;
this.update = update;
this.rescan = rescan;
}
Boolean isValid() {
return validated;
}
// TODO: Implement validation
void validate() {
validated = true;
}
@Override
public String toString() {
return name;
}
}
}
@@ -1,337 +0,0 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.hmci;
import biz.nellemann.hmci.Configuration.HmcObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import static java.lang.Thread.sleep;
class HmcInstance implements Runnable {
private final static Logger log = LoggerFactory.getLogger(HmcInstance.class);
private final String hmcId;
private final Long updateValue;
private final Long rescanValue;
private final Map<String,ManagedSystem> systems = new HashMap<>();
private final Map<String, LogicalPartition> partitions = new HashMap<>();
private final HmcRestClient hmcRestClient;
private final InfluxClient influxClient;
private final AtomicBoolean keepRunning = new AtomicBoolean(true);
private File traceDir;
private Boolean doTrace = false;
private Boolean doEnergy = true;
private List<String> excludeSystems;
private List<String> includeSystems;
private List<String> excludePartitions;
private List<String> includePartitions;
HmcInstance(HmcObject configHmc, InfluxClient influxClient) {
this.hmcId = configHmc.name;
this.updateValue = configHmc.update;
this.rescanValue = configHmc.rescan;
this.doEnergy = configHmc.energy;
this.influxClient = influxClient;
hmcRestClient = new HmcRestClient(configHmc.url, configHmc.username, configHmc.password, configHmc.unsafe);
log.debug("HmcInstance() - id: {}, update: {}, refresh {}", hmcId, updateValue, rescanValue);
if(configHmc.trace != null) {
try {
traceDir = new File(configHmc.trace);
traceDir.mkdirs();
if(traceDir.canWrite()) {
doTrace = true;
} else {
log.warn("HmcInstance() - can't write to trace dir: " + traceDir.toString());
}
} catch (Exception e) {
log.error("HmcInstance() - trace error: " + e.getMessage());
}
}
this.excludeSystems = configHmc.excludeSystems;
this.includeSystems = configHmc.includeSystems;
this.excludePartitions = configHmc.excludePartitions;
this.includePartitions = configHmc.includePartitions;
}
@Override
public String toString() {
return hmcId;
}
@Override
public void run() {
log.trace("run() - " + hmcId);
int executions = 0;
discover();
do {
Instant instantStart = Instant.now();
try {
if (doEnergy) {
getMetricsForEnergy();
}
getMetricsForSystems();
getMetricsForPartitions();
writeMetricsForSystemEnergy();
writeMetricsForManagedSystems();
writeMetricsForLogicalPartitions();
//influxClient.writeBatchPoints();
// Refresh
if (++executions > rescanValue) {
executions = 0;
discover();
}
} catch (Exception e) {
log.error("run() - fatal error: {}", e.getMessage());
keepRunning.set(false);
throw new RuntimeException(e);
}
Instant instantEnd = Instant.now();
long timeSpend = Duration.between(instantStart, instantEnd).toMillis();
log.trace("run() - duration millis: " + timeSpend);
if(timeSpend < (updateValue * 1000)) {
try {
long sleepTime = (updateValue * 1000) - timeSpend;
log.trace("run() - sleeping millis: " + sleepTime);
if(sleepTime > 0) {
//noinspection BusyWait
sleep(sleepTime);
}
} catch (InterruptedException e) {
log.error("run() - sleep interrupted", e);
}
} else {
log.warn("run() - possible slow response from this HMC");
}
} while (keepRunning.get());
// Logout of HMC
try {
hmcRestClient.logoff();
} catch (IOException e) {
log.warn("run() - error logging out of HMC: " + e.getMessage());
}
}
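The pacing in `run()` above (measure the iteration's duration, then sleep only the remainder of the update interval) can be isolated as a small pure function. `IntervalPacer` and `computeSleepMillis` are hypothetical names used for illustration, not part of the codebase:

```java
import java.time.Duration;
import java.time.Instant;

class IntervalPacer {

    // Milliseconds to sleep so that iterations start updateSeconds apart;
    // returns 0 when the iteration already took longer than the interval
    // (the "possible slow response" case logged above).
    static long computeSleepMillis(long updateSeconds, Instant start, Instant end) {
        long elapsed = Duration.between(start, end).toMillis();
        return Math.max(0L, updateSeconds * 1000L - elapsed);
    }
}
```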
void discover() {
log.info("discover() - Querying HMC for Managed Systems and Logical Partitions");
Map<String, LogicalPartition> tmpPartitions = new HashMap<>();
try {
hmcRestClient.login();
hmcRestClient.getManagedSystems().forEach((systemId, system) -> {
// Add to list of known systems
if(!systems.containsKey(systemId)) {
// Check excludeSystems and includeSystems
if(!excludeSystems.contains(system.name) && includeSystems.isEmpty()) {
systems.put(systemId, system);
log.info("discover() - ManagedSystem: {}", system);
if (doEnergy) {
hmcRestClient.enableEnergyMonitoring(system);
}
} else if(!includeSystems.isEmpty() && includeSystems.contains(system.name)) {
systems.put(systemId, system);
log.info("discover() - ManagedSystem (include): {}", system);
if (doEnergy) {
hmcRestClient.enableEnergyMonitoring(system);
}
} else {
log.debug("discover() - Skipping ManagedSystem: {}", system);
}
}
// Get partitions for this system
try {
tmpPartitions.putAll(hmcRestClient.getLogicalPartitionsForManagedSystem(system));
if(!tmpPartitions.isEmpty()) {
partitions.clear();
//partitions.putAll(tmpPartitions);
tmpPartitions.forEach((lparKey, lpar) -> {
if(!excludePartitions.contains(lpar.name) && includePartitions.isEmpty()) {
partitions.put(lparKey, lpar);
log.info("discover() - LogicalPartition: {}", lpar);
} else if(!includePartitions.isEmpty() && includePartitions.contains(lpar.name)) {
partitions.put(lparKey, lpar);
log.info("discover() - LogicalPartition (include): {}", lpar);
} else {
log.debug("discover() - Skipping LogicalPartition: {}", lpar);
}
});
}
} catch (Exception e) {
log.warn("discover() - getLogicalPartitions error: {}", e.getMessage());
}
});
} catch(Exception e) {
log.warn("discover() - getManagedSystems error: {}", e.getMessage());
}
}
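The include/exclude branching in `discover()` reduces to one rule: a non-empty include list wins, otherwise the exclude list applies. A minimal sketch of that rule (`FilterSketch` and `accept` are hypothetical names):

```java
import java.util.List;

class FilterSketch {

    // Mirrors the discover() branches: when an include list is given, only
    // listed names are accepted (the exclude list is not consulted);
    // otherwise everything not on the exclude list is accepted.
    static boolean accept(String name, List<String> include, List<String> exclude) {
        if (!include.isEmpty()) {
            return include.contains(name);
        }
        return !exclude.contains(name);
    }
}
```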
void getMetricsForSystems() {
systems.forEach((systemId, system) -> {
// Get and process metrics for this system
String tmpJsonString = null;
try {
tmpJsonString = hmcRestClient.getPcmDataForManagedSystem(system);
} catch (Exception e) {
log.warn("getMetricsForSystems() - error: {}", e.getMessage());
}
if(tmpJsonString != null && !tmpJsonString.isEmpty()) {
system.processMetrics(tmpJsonString);
if(doTrace) {
writeTraceFile(systemId, tmpJsonString);
}
}
});
}
void getMetricsForPartitions() {
try {
// Get partitions for this system
partitions.forEach((partitionId, partition) -> {
// Get and process metrics for this partition
String tmpJsonString2 = null;
try {
tmpJsonString2 = hmcRestClient.getPcmDataForLogicalPartition(partition);
} catch (Exception e) {
log.warn("getMetricsForPartitions() - getPcmDataForLogicalPartition error: {}", e.getMessage());
}
if(tmpJsonString2 != null && !tmpJsonString2.isEmpty()) {
partition.processMetrics(tmpJsonString2);
if(doTrace) {
writeTraceFile(partitionId, tmpJsonString2);
}
}
});
} catch(Exception e) {
log.warn("getMetricsForPartitions() - error: {}", e.getMessage());
}
}
void getMetricsForEnergy() {
systems.forEach((systemId, system) -> {
// Get and process metrics for this system
String tmpJsonString = null;
try {
tmpJsonString = hmcRestClient.getPcmDataForEnergy(system.energy);
} catch (Exception e) {
log.warn("getMetricsForEnergy() - error: {}", e.getMessage());
}
if(tmpJsonString != null && !tmpJsonString.isEmpty()) {
system.energy.processMetrics(tmpJsonString);
}
});
}
void writeMetricsForManagedSystems() {
try {
systems.forEach((systemId, system) -> influxClient.writeManagedSystem(system));
} catch (NullPointerException npe) {
log.warn("writeMetricsForManagedSystems() - NPE: {}", npe.getMessage(), npe);
}
}
void writeMetricsForLogicalPartitions() {
try {
partitions.forEach((partitionId, partition) -> influxClient.writeLogicalPartition(partition));
} catch (NullPointerException npe) {
log.warn("writeMetricsForLogicalPartitions() - NPE: {}", npe.getMessage(), npe);
}
}
void writeMetricsForSystemEnergy() {
try {
systems.forEach((systemId, system) -> influxClient.writeSystemEnergy(system.energy));
} catch (NullPointerException npe) {
log.warn("writeMetricsForSystemEnergy() - NPE: {}", npe.getMessage(), npe);
}
}
private void writeTraceFile(String id, String json) {
String fileName = String.format("%s-%s.json", id, Instant.now().toString());
log.debug("Writing trace file: " + fileName);
File traceFile = new File(traceDir, fileName);
// try-with-resources ensures the writer is closed even if write() fails
try (BufferedWriter writer = new BufferedWriter(new FileWriter(traceFile))) {
writer.write(json);
} catch (IOException e) {
log.warn("writeTraceFile() - " + e.getMessage());
}
}
}
@@ -1,545 +0,0 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.hmci;
import okhttp3.*;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.nodes.Entities;
import org.jsoup.parser.Parser;
import org.jsoup.select.Elements;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.net.ssl.*;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
public class HmcRestClient {
private final static Logger log = LoggerFactory.getLogger(HmcRestClient.class);
private final MediaType MEDIA_TYPE_IBM_XML_LOGIN = MediaType.parse("application/vnd.ibm.powervm.web+xml; type=LogonRequest");
protected Integer responseErrors = 0;
protected String authToken;
private final OkHttpClient client;
// OkHttpClient timeouts
private final static int CONNECT_TIMEOUT = 30;
private final static int WRITE_TIMEOUT = 30;
private final static int READ_TIMEOUT = 180;
private final String baseUrl;
private final String username;
private final String password;
HmcRestClient(String url, String username, String password, Boolean unsafe) {
this.baseUrl = url;
this.username = username;
this.password = password;
if(unsafe) {
this.client = getUnsafeOkHttpClient();
} else {
this.client = getSafeOkHttpClient();
}
}
@Override
public String toString() {
return baseUrl;
}
/**
* Logon to the HMC and get an authentication token for further requests.
*/
synchronized void login() throws Exception {
log.debug("Connecting to HMC - " + baseUrl);
logoff();
StringBuilder payload = new StringBuilder();
payload.append("<?xml version='1.0' encoding='UTF-8' standalone='yes'?>");
payload.append("<LogonRequest xmlns='http://www.ibm.com/xmlns/systems/power/firmware/web/mc/2012_10/' schemaVersion='V1_0'>");
payload.append("<UserID>").append(username).append("</UserID>");
payload.append("<Password>").append(password).append("</Password>");
payload.append("</LogonRequest>");
try {
URL url = new URL(String.format("%s/rest/api/web/Logon", baseUrl));
Request request = new Request.Builder()
.url(url)
.addHeader("Accept", "application/vnd.ibm.powervm.web+xml; type=LogonResponse")
.addHeader("X-Audit-Memento", "hmci")
.put(RequestBody.create(payload.toString(), MEDIA_TYPE_IBM_XML_LOGIN))
.build();
Response response = client.newCall(request).execute();
String responseBody = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
log.warn("login() - Unexpected response: {}", response.code());
throw new IOException("Unexpected code: " + response);
}
Document doc = Jsoup.parse(responseBody);
authToken = doc.select("X-API-Session").text();
log.debug("login() - Auth Token: " + authToken);
} catch (MalformedURLException e) {
log.error("login() - URL Error: {}", e.getMessage());
throw e;
} catch (Exception e) {
log.error("login() - Error: {}", e.getMessage());
throw e;
}
}
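For reference, the payload assembled by the StringBuilder above serializes to the following document (credentials illustrative):

```xml
<?xml version='1.0' encoding='UTF-8' standalone='yes'?>
<LogonRequest xmlns='http://www.ibm.com/xmlns/systems/power/firmware/web/mc/2012_10/' schemaVersion='V1_0'>
  <UserID>hscroot</UserID>
  <Password>secret</Password>
</LogonRequest>
```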
/**
* Logoff from the HMC and remove any session
*
*/
synchronized void logoff() throws IOException {
if(authToken == null) {
return;
}
URL absUrl = new URL(String.format("%s/rest/api/web/Logon", baseUrl));
Request request = new Request.Builder()
.url(absUrl)
.addHeader("Content-Type", "application/vnd.ibm.powervm.web+xml; type=LogonRequest")
.addHeader("X-API-Session", authToken)
.delete()
.build();
try {
client.newCall(request).execute();
} catch (IOException e) {
log.warn("logoff() error: {}", e.getMessage());
} finally {
authToken = null;
}
}
/**
* Return Map of ManagedSystems seen by this HMC
*
* @return Map of system-id and ManagedSystem
*/
Map<String, ManagedSystem> getManagedSystems() throws Exception {
URL url = new URL(String.format("%s/rest/api/uom/ManagedSystem", baseUrl));
String responseBody = sendGetRequest(url);
Map<String,ManagedSystem> managedSystemsMap = new HashMap<>();
// Do not try to parse empty response
if(responseBody == null || responseBody.length() <= 1) {
responseErrors++;
return managedSystemsMap;
}
try {
Document doc = Jsoup.parse(responseBody);
Elements managedSystems = doc.select("ManagedSystem|ManagedSystem");
for(Element el : managedSystems) {
ManagedSystem system = new ManagedSystem(
el.select("Metadata > Atom > AtomID").text(),
el.select("SystemName").text(),
el.select("MachineTypeModelAndSerialNumber > MachineType").text(),
el.select("MachineTypeModelAndSerialNumber > Model").text(),
el.select("MachineTypeModelAndSerialNumber > SerialNumber").text()
);
managedSystemsMap.put(system.id, system);
log.debug("getManagedSystems() - Found system: {}", system);
}
} catch(Exception e) {
log.warn("getManagedSystems() - XML parse error", e);
}
return managedSystemsMap;
}
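The selectors above imply a response shaped roughly like the snippet below. This is an abbreviated, illustrative sketch only; the real response is an Atom feed with namespaces and many more elements, and all values here are made up:

```xml
<ManagedSystem:ManagedSystem>
  <Metadata><Atom><AtomID>b597e4da-0000-0000-0000-000000000000</AtomID></Atom></Metadata>
  <SystemName>p750-1</SystemName>
  <MachineTypeModelAndSerialNumber>
    <MachineType>8233</MachineType>
    <Model>E8B</Model>
    <SerialNumber>0612345</SerialNumber>
  </MachineTypeModelAndSerialNumber>
</ManagedSystem:ManagedSystem>
```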
/**
* Return Map of LogicalPartitions seen by a ManagedSystem on this HMC
* @param system a valid ManagedSystem
* @return Map of partition-id and LogicalPartition
*/
Map<String, LogicalPartition> getLogicalPartitionsForManagedSystem(ManagedSystem system) throws Exception {
URL url = new URL(String.format("%s/rest/api/uom/ManagedSystem/%s/LogicalPartition", baseUrl, system.id));
String responseBody = sendGetRequest(url);
Map<String, LogicalPartition> partitionMap = new HashMap<>();
// Do not try to parse empty response
if(responseBody == null || responseBody.length() <= 1) {
responseErrors++;
return partitionMap;
}
try {
Document doc = Jsoup.parse(responseBody);
Elements logicalPartitions = doc.select("LogicalPartition|LogicalPartition");
for(Element el : logicalPartitions) {
LogicalPartition logicalPartition = new LogicalPartition(
el.select("PartitionUUID").text(),
el.select("PartitionName").text(),
el.select("PartitionType").text(),
system
);
partitionMap.put(logicalPartition.id, logicalPartition);
log.debug("getLogicalPartitionsForManagedSystem() - Found partition: {}", logicalPartition);
}
} catch(Exception e) {
log.warn("getLogicalPartitionsForManagedSystem() - XML parse error: {}", system, e);
}
return partitionMap;
}
/**
* Parse XML feed to get PCM Data in JSON format
* @param system a valid ManagedSystem
* @return JSON string with PCM data for this ManagedSystem
*/
String getPcmDataForManagedSystem(ManagedSystem system) throws Exception {
log.trace("getPcmDataForManagedSystem() - {}", system.id);
URL url = new URL(String.format("%s/rest/api/pcm/ManagedSystem/%s/ProcessedMetrics?NoOfSamples=1", baseUrl, system.id));
String responseBody = sendGetRequest(url);
String jsonBody = null;
// Do not try to parse empty response
if(responseBody == null || responseBody.length() <= 1) {
responseErrors++;
log.warn("getPcmDataForManagedSystem() - empty response, skipping: {}", system.name);
return null;
}
try {
Document doc = Jsoup.parse(responseBody);
Element entry = doc.select("feed > entry").first();
Element link = Objects.requireNonNull(entry).select("link[href]").first();
if(Objects.requireNonNull(link).attr("type").equals("application/json")) {
String href = link.attr("href");
log.trace("getPcmDataForManagedSystem() - URL: {}", href);
jsonBody = sendGetRequest(new URL(href));
}
} catch(Exception e) {
log.warn("getPcmDataForManagedSystem() - XML parse error: {}", system, e);
}
return jsonBody;
}
/**
* Parse XML feed to get PCM Data in JSON format
* @param partition a valid LogicalPartition
* @return JSON string with PCM data for this LogicalPartition
*/
String getPcmDataForLogicalPartition(LogicalPartition partition) throws Exception {
log.trace("getPcmDataForLogicalPartition() - {} @ {}", partition.id, partition.system.id);
URL url = new URL(String.format("%s/rest/api/pcm/ManagedSystem/%s/LogicalPartition/%s/ProcessedMetrics?NoOfSamples=1", baseUrl, partition.system.id, partition.id));
String responseBody = sendGetRequest(url);
String jsonBody = null;
// Do not try to parse empty response
if(responseBody == null || responseBody.length() <= 1) {
responseErrors++;
log.warn("getPcmDataForLogicalPartition() - empty response, skipping: {}", partition.name);
return null;
}
try {
Document doc = Jsoup.parse(responseBody);
Element entry = doc.select("feed > entry").first();
Element link = Objects.requireNonNull(entry).select("link[href]").first();
if(Objects.requireNonNull(link).attr("type").equals("application/json")) {
String href = link.attr("href");
log.trace("getPcmDataForLogicalPartition() - URL: {}", href);
jsonBody = sendGetRequest(new URL(href));
}
} catch(Exception e) {
log.warn("getPcmDataForLogicalPartition() - XML parse error: {}", partition.id, e);
}
return jsonBody;
}
/**
* Parse XML feed to get PCM Data in JSON format.
* Does not work on older HMCs (pre v9) or older Power servers (pre POWER8).
* @param systemEnergy a valid SystemEnergy
* @return JSON string with PCM data for this SystemEnergy
*/
String getPcmDataForEnergy(SystemEnergy systemEnergy) throws Exception {
log.trace("getPcmDataForEnergy() - " + systemEnergy.system.id);
URL url = new URL(String.format("%s/rest/api/pcm/ManagedSystem/%s/ProcessedMetrics?Type=Energy&NoOfSamples=1", baseUrl, systemEnergy.system.id));
String responseBody = sendGetRequest(url);
String jsonBody = null;
//log.info(responseBody);
// Do not try to parse empty response
if(responseBody == null || responseBody.length() <= 1) {
responseErrors++;
log.trace("getPcmDataForEnergy() - empty response, skipping: {}", systemEnergy);
return null;
}
try {
Document doc = Jsoup.parse(responseBody);
Element entry = doc.select("feed > entry").first();
Element link = Objects.requireNonNull(entry).select("link[href]").first();
if(Objects.requireNonNull(link).attr("type").equals("application/json")) {
String href = link.attr("href");
log.trace("getPcmDataForEnergy() - URL: {}", href);
jsonBody = sendGetRequest(new URL(href));
}
} catch(Exception e) {
log.warn("getPcmDataForEnergy() - XML parse error: {}", systemEnergy, e);
}
return jsonBody;
}
/**
* Set EnergyMonitorEnabled preference to true, if possible.
* @param system the ManagedSystem to enable energy monitoring for
*/
void enableEnergyMonitoring(ManagedSystem system) {
log.trace("enableEnergyMonitoring() - {}", system);
try {
URL url = new URL(String.format("%s/rest/api/pcm/ManagedSystem/%s/preferences", baseUrl, system.id));
String responseBody = sendGetRequest(url);
// Do not try to parse empty response
if(responseBody == null || responseBody.length() <= 1) {
responseErrors++;
log.warn("enableEnergyMonitoring() - empty response, skipping: {}", system);
return;
}
Document doc = Jsoup.parse(responseBody, "", Parser.xmlParser());
doc.outputSettings().escapeMode(Entities.EscapeMode.xhtml);
doc.outputSettings().prettyPrint(false);
doc.outputSettings().charset("US-ASCII");
Element entry = doc.select("feed > entry").first();
Element link1 = Objects.requireNonNull(entry).select("EnergyMonitoringCapable").first();
Element link2 = entry.select("EnergyMonitorEnabled").first();
if(Objects.requireNonNull(link1).text().equals("true")) {
log.debug("enableEnergyMonitoring() - EnergyMonitoringCapable == true");
if(Objects.requireNonNull(link2).text().equals("false")) {
//log.warn("enableEnergyMonitoring() - EnergyMonitorEnabled == false");
link2.text("true");
Document content = Jsoup.parse(Objects.requireNonNull(doc.select("Content").first()).html(), "", Parser.xmlParser());
content.outputSettings().escapeMode(Entities.EscapeMode.xhtml);
content.outputSettings().prettyPrint(false);
content.outputSettings().charset("UTF-8");
String updateXml = content.outerHtml();
sendPostRequest(url, updateXml);
}
} else {
log.warn("enableEnergyMonitoring() - EnergyMonitoringCapable == false");
}
} catch (Exception e) {
log.debug("enableEnergyMonitoring() - Error: {}", e.getMessage());
}
}
/**
* Send a GET request to the HMC and return the response body
* @param url URL to request
* @return response body string, or null when not logged in or after a 401 re-login
*/
private String sendGetRequest(URL url) throws Exception {
log.trace("sendGetRequest() - URL: {}", url.toString());
if(authToken == null) {
return null;
}
Request request = new Request.Builder()
.url(url)
.addHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
.addHeader("X-API-Session", authToken)
.get().build();
Response response = client.newCall(request).execute();
String body = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
response.close();
if(response.code() == 401) {
log.warn("sendGetRequest() - 401 - login and retry.");
authToken = null;
login();
return null;
}
log.error("sendGetRequest() - Unexpected response: {}", response.code());
throw new IOException("sendGetRequest() - Unexpected response: " + response.code());
}
return body;
}
/**
* Send a POST request with a payload (can be null) to the HMC
* @param url destination URL
* @param payload request body as XML, or null for an empty body
* @return response body string, or null when not logged in
* @throws Exception on unexpected response codes
*/
public String sendPostRequest(URL url, String payload) throws Exception {
log.trace("sendPostRequest() - URL: {}", url.toString());
if(authToken == null) {
return null;
}
RequestBody requestBody;
if(payload != null) {
//log.debug("sendPostRequest() - payload: " + payload);
requestBody = RequestBody.create(payload, MediaType.get("application/xml"));
} else {
requestBody = RequestBody.create("", null);
}
Request request = new Request.Builder()
.url(url)
//.addHeader("Content-Type", "application/xml")
.addHeader("content-type", "application/xml")
.addHeader("X-API-Session", authToken)
.post(requestBody).build();
Response response = client.newCall(request).execute();
String body = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
response.close();
log.warn(body);
log.error("sendPostRequest() - Unexpected response: {}", response.code());
throw new IOException("sendPostRequest() - Unexpected response: " + response.code());
}
return body;
}
/**
* Provide an unsafe (ignoring SSL problems) OkHttpClient
*
* @return OkHttpClient ignoring SSL/TLS errors
*/
private static OkHttpClient getUnsafeOkHttpClient() {
try {
// Create a trust manager that does not validate certificate chains
final TrustManager[] trustAllCerts = new TrustManager[] {
new X509TrustManager() {
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType) { }
@Override
public void checkServerTrusted(X509Certificate[] chain, String authType) {
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[]{};
}
}
};
// Install the all-trusting trust manager
final SSLContext sslContext = SSLContext.getInstance("SSL");
sslContext.init(null, trustAllCerts, new SecureRandom());
// Create a ssl socket factory with our all-trusting manager
final SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.sslSocketFactory(sslSocketFactory, (X509TrustManager)trustAllCerts[0]);
builder.hostnameVerifier((hostname, session) -> true);
builder.connectTimeout(CONNECT_TIMEOUT, TimeUnit.SECONDS);
builder.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS);
builder.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS);
return builder.build();
} catch (KeyManagementException | NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
}
/**
* Get OkHttpClient with our preferred timeout values.
* @return OkHttpClient
*/
private static OkHttpClient getSafeOkHttpClient() {
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.connectTimeout(CONNECT_TIMEOUT, TimeUnit.SECONDS);
builder.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS);
builder.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS);
return builder.build();
}
}
@@ -15,7 +15,7 @@
 */
 package biz.nellemann.hmci;
-import biz.nellemann.hmci.Configuration.InfluxObject;
+import biz.nellemann.hmci.dto.toml.InfluxConfiguration;
 import org.influxdb.BatchOptions;
 import org.influxdb.InfluxDB;
 import org.influxdb.InfluxDBFactory;
@@ -26,8 +26,6 @@ import org.slf4j.LoggerFactory;
 import java.time.Instant;
 import java.util.ArrayList;
 import java.util.List;
-import java.util.Map;
-import java.util.TreeMap;
 import java.util.concurrent.TimeUnit;
 import static java.lang.Thread.sleep;
@@ -43,8 +41,7 @@ public final class InfluxClient {
 private InfluxDB influxDB;
-InfluxClient(InfluxObject config) {
+InfluxClient(InfluxConfiguration config) {
 this.url = config.url;
 this.username = config.username;
 this.password = config.password;
@@ -74,8 +71,7 @@ public final class InfluxClient {
 thread.setDaemon(true);
 return thread;
 })
-); // (4)
+);
 Runtime.getRuntime().addShutdownHook(new Thread(influxDB::close));
 connected = true;
@@ -100,314 +96,27 @@ public final class InfluxClient {
 influxDB = null;
 }
/*
synchronized void writeBatchPoints() throws Exception {
log.trace("writeBatchPoints()");
try {
influxDB.write(batchPoints);
batchPoints = BatchPoints.database(database).precision(TimeUnit.SECONDS).build();
errorCounter = 0;
} catch (InfluxDBException.DatabaseNotFoundException e) {
log.error("writeBatchPoints() - database \"{}\" not found/created: can't write data", database);
if (++errorCounter > 3) {
throw new RuntimeException(e);
}
} catch (org.influxdb.InfluxDBIOException e) {
log.warn("writeBatchPoints() - io exception: {}", e.getMessage());
if(++errorCounter < 3) {
log.warn("writeBatchPoints() - reconnecting to InfluxDB due to io exception.");
logoff();
login();
writeBatchPoints();
} else {
throw new RuntimeException(e);
}
} catch(Exception e) {
log.warn("writeBatchPoints() - general exception: {}", e.getMessage());
if(++errorCounter < 3) {
log.warn("writeBatchPoints() - reconnecting to InfluxDB due to general exception.");
logoff();
login();
writeBatchPoints();
} else {
throw new RuntimeException(e);
}
}
}
*/
/*
Managed System
*/
void writeManagedSystem(ManagedSystem system) {
if(system.metrics == null) {
log.trace("writeManagedSystem() - null metrics, skipping: {}", system.name);
return;
}
Instant timestamp = system.getTimestamp();
if(timestamp == null) {
log.warn("writeManagedSystem() - no timestamp, skipping: {}", system.name);
return;
}
getSystemDetails(system, timestamp).forEach( it -> influxDB.write(it));
getSystemProcessor(system, timestamp).forEach( it -> influxDB.write(it) );
getSystemPhysicalProcessorPool(system, timestamp).forEach( it -> influxDB.write(it) );
getSystemSharedProcessorPools(system, timestamp).forEach( it -> influxDB.write(it) );
getSystemMemory(system, timestamp).forEach( it -> influxDB.write(it) );
getSystemViosDetails(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosProcessor(system, timestamp).forEach( it -> influxDB.write(it) );
getSystemViosMemory(system, timestamp).forEach( it -> influxDB.write(it) );
getSystemViosNetworkLpars(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosNetworkGenericAdapters(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosNetworkSharedAdapters(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosNetworkVirtualAdapters(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosStorageLpars(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosFiberChannelAdapters(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosStoragePhysicalAdapters(system, timestamp).forEach(it -> influxDB.write(it) );
getSystemViosStorageVirtualAdapters(system, timestamp).forEach(it -> influxDB.write(it) );
+public void write(List<Measurement> measurements, Instant timestamp, String measurement) {
+log.debug("write() - measurement: {} {}", measurement, measurements.size());
+processMeasurementMap(measurements, timestamp, measurement).forEach( (point) -> { influxDB.write(point); });
 }
-// TODO: server_details
+private List<Point> processMeasurementMap(List<Measurement> measurements, Instant timestamp, String measurement) {
private static List<Point> getSystemDetails(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getDetails();
return processMeasurementMap(metrics, timestamp, "server_details");
}
private static List<Point> getSystemProcessor(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getProcessorMetrics();
return processMeasurementMap(metrics, timestamp, "server_processor");
}
private static List<Point> getSystemPhysicalProcessorPool (ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getPhysicalProcessorPool();
return processMeasurementMap(metrics, timestamp, "server_physicalProcessorPool");
}
private static List<Point> getSystemSharedProcessorPools(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getSharedProcessorPools();
return processMeasurementMap(metrics, timestamp, "server_sharedProcessorPool");
}
private static List<Point> getSystemMemory(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getMemoryMetrics();
return processMeasurementMap(metrics, timestamp, "server_memory");
}
private static List<Point> getSystemViosDetails(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosDetails();
return processMeasurementMap(metrics, timestamp, "vios_details");
}
private static List<Point> getSystemViosProcessor(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosProcessorMetrics();
return processMeasurementMap(metrics, timestamp, "vios_processor");
}
private static List<Point> getSystemViosMemory(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosMemoryMetrics();
return processMeasurementMap(metrics, timestamp, "vios_memory");
}
private static List<Point> getSystemViosNetworkLpars(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosNetworkLpars();
return processMeasurementMap(metrics, timestamp, "vios_network_lpars");
}
private static List<Point> getSystemViosNetworkVirtualAdapters(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosNetworkVirtualAdapters();
return processMeasurementMap(metrics, timestamp, "vios_network_virtual");
}
private static List<Point> getSystemViosNetworkSharedAdapters(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosNetworkSharedAdapters();
return processMeasurementMap(metrics, timestamp, "vios_network_shared");
}
private static List<Point> getSystemViosNetworkGenericAdapters(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosNetworkGenericAdapters();
return processMeasurementMap(metrics, timestamp, "vios_network_generic");
}
private static List<Point> getSystemViosStorageLpars(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosStorageLpars();
return processMeasurementMap(metrics, timestamp, "vios_storage_lpars");
}
private static List<Point> getSystemViosFiberChannelAdapters(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosStorageFiberChannelAdapters();
return processMeasurementMap(metrics, timestamp, "vios_storage_FC");
}
private static List<Point> getSystemViosSharedStoragePools(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosStorageSharedStoragePools();
return processMeasurementMap(metrics, timestamp, "vios_storage_SSP");
}
private static List<Point> getSystemViosStoragePhysicalAdapters(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosStoragePhysicalAdapters();
return processMeasurementMap(metrics, timestamp, "vios_storage_physical");
}
private static List<Point> getSystemViosStorageVirtualAdapters(ManagedSystem system, Instant timestamp) {
List<Measurement> metrics = system.getViosStorageVirtualAdapters();
return processMeasurementMap(metrics, timestamp, "vios_storage_vFC");
}
/*
Logical Partitions
*/
void writeLogicalPartition(LogicalPartition partition) {
if(partition.metrics == null) {
log.warn("writeLogicalPartition() - null metrics, skipping: {}", partition.name);
return;
}
Instant timestamp = partition.getTimestamp();
if(timestamp == null) {
log.warn("writeLogicalPartition() - no timestamp, skipping: {}", partition.name);
return;
}
getPartitionDetails(partition, timestamp).forEach( it -> influxDB.write(it));
getPartitionMemory(partition, timestamp).forEach( it -> influxDB.write(it));
getPartitionProcessor(partition, timestamp).forEach( it -> influxDB.write(it));
getPartitionNetworkVirtual(partition, timestamp).forEach(it -> influxDB.write(it));
getPartitionSriovLogicalPorts(partition, timestamp).forEach(it -> influxDB.write(it));
getPartitionStorageVirtualGeneric(partition, timestamp).forEach(it -> influxDB.write(it));
getPartitionStorageVirtualFibreChannel(partition, timestamp).forEach(it -> influxDB.write(it));
}
private static List<Point> getPartitionDetails(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getDetails();
return processMeasurementMap(metrics, timestamp, "lpar_details");
}
private static List<Point> getPartitionProcessor(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getProcessorMetrics();
return processMeasurementMap(metrics, timestamp, "lpar_processor");
}
private static List<Point> getPartitionMemory(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getMemoryMetrics();
return processMeasurementMap(metrics, timestamp, "lpar_memory");
}
private static List<Point> getPartitionNetworkVirtual(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getVirtualEthernetAdapterMetrics();
return processMeasurementMap(metrics, timestamp, "lpar_net_virtual"); // Not 'network'
}
private static List<Point> getPartitionSriovLogicalPorts(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getSriovLogicalPorts();
return processMeasurementMap(metrics, timestamp, "lpar_net_sriov"); // Not 'network'
}
private static List<Point> getPartitionStorageVirtualGeneric(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getVirtualGenericAdapterMetrics();
return processMeasurementMap(metrics, timestamp, "lpar_storage_virtual");
}
private static List<Point> getPartitionStorageVirtualFibreChannel(LogicalPartition partition, Instant timestamp) {
List<Measurement> metrics = partition.getVirtualFibreChannelAdapterMetrics();
return processMeasurementMap(metrics, timestamp, "lpar_storage_vFC");
}
/*
System Energy
Not supported on older HMC (pre v8) or older Power server (pre Power 8)
*/
void writeSystemEnergy(SystemEnergy systemEnergy) {
if(systemEnergy.metrics == null) {
log.trace("writeSystemEnergy() - null metrics, skipping: {}", systemEnergy.system.name);
return;
}
Instant timestamp = systemEnergy.getTimestamp();
if(timestamp == null) {
log.warn("writeSystemEnergy() - no timestamp, skipping: {}", systemEnergy.system.name);
return;
}
getSystemEnergyPower(systemEnergy, timestamp).forEach(it -> influxDB.write(it) );
getSystemEnergyTemperature(systemEnergy, timestamp).forEach(it -> influxDB.write(it) );
}
private static List<Point> getSystemEnergyPower(SystemEnergy system, Instant timestamp) {
List<Measurement> metrics = system.getPowerMetrics();
return processMeasurementMap(metrics, timestamp, "server_energy_power");
}
private static List<Point> getSystemEnergyTemperature(SystemEnergy system, Instant timestamp) {
List<Measurement> metrics = system.getThermalMetrics();
return processMeasurementMap(metrics, timestamp, "server_energy_thermal");
}
/*
Shared
*/
private static List<Point> processMeasurementMap(List<Measurement> measurements, Instant timestamp, String measurement) {
List<Point> listOfPoints = new ArrayList<>();
measurements.forEach( (m) -> {
Point.Builder builder = Point.measurement(measurement)
.time(timestamp.toEpochMilli(), TimeUnit.MILLISECONDS)
.tag(m.tags)
.fields(m.fields);
listOfPoints.add(builder.build());
});
return listOfPoints;
}
}
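The hunk above switches the InfluxDB point timestamp from second to millisecond precision (`getEpochSecond()`/`TimeUnit.SECONDS` becomes `toEpochMilli()`/`TimeUnit.MILLISECONDS`). A minimal stand-alone sketch of the difference, using only `java.time`; the sample instant is made up:

```java
import java.time.Instant;

public class TimestampPrecision {
    public static void main(String[] args) {
        // An arbitrary sample timestamp with sub-second detail.
        Instant ts = Instant.parse("2022-12-01T15:17:47.250Z");

        // Second precision (old behaviour) truncates the fraction ...
        long seconds = ts.getEpochSecond();

        // ... millisecond precision (new behaviour) keeps it.
        long millis = ts.toEpochMilli();

        System.out.println(seconds);
        System.out.println(millis);
    }
}
```

With second precision, two samples taken within the same second would collapse into one InfluxDB point; millisecond precision keeps them distinct.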

@@ -15,57 +15,151 @@
*/
package biz.nellemann.hmci;

import biz.nellemann.hmci.dto.xml.Link;
import biz.nellemann.hmci.dto.xml.LogicalPartitionEntry;
import biz.nellemann.hmci.dto.xml.XmlEntry;
import biz.nellemann.hmci.dto.xml.XmlFeed;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.*;
class LogicalPartition extends Resource {

private final static Logger log = LoggerFactory.getLogger(LogicalPartition.class);

private final RestClient restClient;
private final ManagedSystem managedSystem;

protected String id;
protected String name;
protected LogicalPartitionEntry entry;

private String uriPath;

public LogicalPartition(RestClient restClient, String href, ManagedSystem managedSystem) {
log.debug("LogicalPartition() - {}", href);
this.restClient = restClient;
this.managedSystem = managedSystem;
try {
URI uri = new URI(href);
uriPath = uri.getPath();
} catch (URISyntaxException e) {
log.error("LogicalPartition() - {}", e.getMessage());
}
}

@Override
public String toString() {
return entry.getName();
}
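The constructor above keeps only the path component of the `href` the HMC hands back, since the `RestClient` supplies its own base URL. A small sketch of that extraction; the host name and trailing identifier are invented for illustration:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HrefToPath {
    public static void main(String[] args) throws URISyntaxException {
        // A made-up HMC href; only the path is retained for later requests.
        String href = "https://hmc.example.com:12443/rest/api/uom/LogicalPartition/2F7D8AF6";
        String uriPath = new URI(href).getPath();
        System.out.println(uriPath); // /rest/api/uom/LogicalPartition/2F7D8AF6
    }
}
```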
public void discover() {
try {
String xml = restClient.getRequest(uriPath);
// Do not try to parse empty response
if(xml == null || xml.length() <= 1) {
log.warn("discover() - no data.");
return;
}
XmlMapper xmlMapper = new XmlMapper();
XmlEntry xmlEntry = xmlMapper.readValue(xml, XmlEntry.class);
if(xmlEntry.getContent() == null){
log.warn("discover() - no content.");
return;
}
this.id = xmlEntry.id;
if(xmlEntry.getContent().isLogicalPartition()) {
entry = xmlEntry.getContent().getLogicalPartitionEntry();
this.name = entry.getName();
log.info("discover() - [{}] {} ({})", String.format("%2d", entry.partitionId), entry.getName(), entry.operatingSystemType);
} else {
throw new UnsupportedOperationException("Failed to deserialize LogicalPartition");
}
} catch (Exception e) {
log.error("discover() - error: {}", e.getMessage());
}
}
public void refresh() {
log.debug("refresh()");
try {
String xml = restClient.getRequest(String.format("/rest/api/pcm/ManagedSystem/%s/LogicalPartition/%s/ProcessedMetrics?NoOfSamples=1", managedSystem.id, id));
// Do not try to parse empty response
if(xml == null || xml.length() <= 1) {
log.warn("refresh() - no data.");
return;
}
XmlMapper xmlMapper = new XmlMapper();
XmlFeed xmlFeed = xmlMapper.readValue(xml, XmlFeed.class);
xmlFeed.entries.forEach((entry) -> {
if(entry.category.term.equals("LogicalPartition")) {
Link link = entry.link;
if (link.getType() != null && Objects.equals(link.getType(), "application/json")) {
try {
URI jsonUri = URI.create(link.getHref());
String json = restClient.getRequest(jsonUri.getPath());
deserialize(json);
} catch (IOException e) {
log.error("refresh() - error 1: {}", e.getMessage());
}
}
}
});
} catch (IOException e) {
log.error("refresh() - error 2: {}", e.getMessage());
}
}
// LPAR Details
List<Measurement> getDetails() {
List<Measurement> list = new ArrayList<>();

try {
Map<String, String> tagsMap = new HashMap<>();
TreeMap<String, Object> fieldsMap = new TreeMap<>();

tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
log.trace("getDetails() - tags: " + tagsMap);

fieldsMap.put("id", metric.getSample().lparsUtil.id);
fieldsMap.put("type", metric.getSample().lparsUtil.type);
fieldsMap.put("state", metric.getSample().lparsUtil.state);
fieldsMap.put("osType", metric.getSample().lparsUtil.osType);
fieldsMap.put("affinityScore", metric.getSample().lparsUtil.affinityScore);
log.trace("getDetails() - fields: " + fieldsMap);

list.add(new Measurement(tagsMap, fieldsMap));
} catch (Exception e) {
log.warn("getDetails() - error: {}", e.getMessage());
}

return list;
}
@@ -75,17 +169,22 @@ class LogicalPartition extends MetaSystem {
List<Measurement> list = new ArrayList<>();

try {
Map<String, String> tagsMap = new HashMap<>();
TreeMap<String, Object> fieldsMap = new TreeMap<>();

tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
log.trace("getMemoryMetrics() - tags: " + tagsMap);

fieldsMap.put("logicalMem", metric.getSample().lparsUtil.memory.logicalMem);
fieldsMap.put("backedPhysicalMem", metric.getSample().lparsUtil.memory.backedPhysicalMem);
log.trace("getMemoryMetrics() - fields: " + fieldsMap);

list.add(new Measurement(tagsMap, fieldsMap));
} catch (Exception e) {
log.warn("getMemoryMetrics() - error: {}", e.getMessage());
}

return list;
}
@@ -95,29 +194,35 @@ class LogicalPartition extends MetaSystem {
List<Measurement> list = new ArrayList<>();

try {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();

tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
log.trace("getProcessorMetrics() - tags: " + tagsMap);

fieldsMap.put("utilizedProcUnits", metric.getSample().lparsUtil.processor.utilizedProcUnits);
fieldsMap.put("entitledProcUnits", metric.getSample().lparsUtil.processor.entitledProcUnits);
fieldsMap.put("donatedProcUnits", metric.getSample().lparsUtil.processor.donatedProcUnits);
fieldsMap.put("idleProcUnits", metric.getSample().lparsUtil.processor.idleProcUnits);
fieldsMap.put("maxProcUnits", metric.getSample().lparsUtil.processor.maxProcUnits);
fieldsMap.put("maxVirtualProcessors", metric.getSample().lparsUtil.processor.maxVirtualProcessors);
fieldsMap.put("currentVirtualProcessors", metric.getSample().lparsUtil.processor.currentVirtualProcessors);
fieldsMap.put("utilizedCappedProcUnits", metric.getSample().lparsUtil.processor.utilizedCappedProcUnits);
fieldsMap.put("utilizedUncappedProcUnits", metric.getSample().lparsUtil.processor.utilizedUncappedProcUnits);
fieldsMap.put("timePerInstructionExecution", metric.getSample().lparsUtil.processor.timeSpentWaitingForDispatch);
fieldsMap.put("timeSpentWaitingForDispatch", metric.getSample().lparsUtil.processor.timePerInstructionExecution);
fieldsMap.put("mode", metric.getSample().lparsUtil.processor.mode);
fieldsMap.put("weight", metric.getSample().lparsUtil.processor.weight);
fieldsMap.put("poolId", metric.getSample().lparsUtil.processor.poolId);
log.trace("getProcessorMetrics() - fields: " + fieldsMap);

list.add(new Measurement(tagsMap, fieldsMap));
} catch (Exception e) {
log.warn("getProcessorMetrics() - error: {}", e.getMessage());
}

return list;
}
@@ -127,18 +232,20 @@ class LogicalPartition extends MetaSystem {
List<Measurement> list = new ArrayList<>();

try {
metric.getSample().lparsUtil.network.virtualEthernetAdapters.forEach(adapter -> {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();

tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
tagsMap.put("location", adapter.physicalLocation);
tagsMap.put("viosId", adapter.viosId.toString());
tagsMap.put("vlanId", adapter.vlanId.toString());
tagsMap.put("vswitchId", adapter.vswitchId.toString());
log.trace("getVirtualEthernetAdapterMetrics() - tags: " + tagsMap);

fieldsMap.put("droppedPackets", adapter.droppedPackets);
fieldsMap.put("droppedPhysicalPackets", adapter.droppedPhysicalPackets);
fieldsMap.put("isPortVlanId", adapter.isPortVlanId);
@@ -153,47 +260,14 @@ class LogicalPartition extends MetaSystem {
fieldsMap.put("transferredBytes", adapter.transferredBytes);
fieldsMap.put("transferredPhysicalBytes", adapter.transferredPhysicalBytes);
fieldsMap.put("sharedEthernetAdapterId", adapter.sharedEthernetAdapterId);
log.trace("getVirtualEthernetAdapterMetrics() - fields: " + fieldsMap);

list.add(new Measurement(tagsMap, fieldsMap));
});
} catch (Exception e) {
log.warn("getVirtualEthernetAdapterMetrics() - error: {}", e.getMessage());
}

return list;
}
@@ -202,26 +276,33 @@ class LogicalPartition extends MetaSystem {
List<Measurement> getVirtualGenericAdapterMetrics() {
List<Measurement> list = new ArrayList<>();

try {
metric.getSample().lparsUtil.storage.genericVirtualAdapters.forEach(adapter -> {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();

tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
tagsMap.put("viosId", adapter.viosId.toString());
tagsMap.put("location", adapter.physicalLocation);
tagsMap.put("id", adapter.id);
log.trace("getVirtualGenericAdapterMetrics() - tags: " + tagsMap);

fieldsMap.put("numOfReads", adapter.numOfReads);
fieldsMap.put("numOfWrites", adapter.numOfWrites);
fieldsMap.put("writeBytes", adapter.writeBytes);
fieldsMap.put("readBytes", adapter.readBytes);
fieldsMap.put("type", adapter.type);
log.trace("getVirtualGenericAdapterMetrics() - fields: " + fieldsMap);

list.add(new Measurement(tagsMap, fieldsMap));
});
} catch (Exception e) {
log.warn("getVirtualGenericAdapterMetrics() - error: {}", e.getMessage());
}

return list;
}
@@ -230,16 +311,19 @@ class LogicalPartition extends MetaSystem {
List<Measurement> getVirtualFibreChannelAdapterMetrics() {
List<Measurement> list = new ArrayList<>();

try {
metric.getSample().lparsUtil.storage.virtualFiberChannelAdapters.forEach(adapter -> {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();

tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
tagsMap.put("viosId", adapter.viosId.toString());
tagsMap.put("location", adapter.physicalLocation);
log.trace("getVirtualFibreChannelAdapterMetrics() - tags: " + tagsMap);

fieldsMap.put("numOfReads", adapter.numOfReads);
fieldsMap.put("numOfWrites", adapter.numOfWrites);
fieldsMap.put("writeBytes", adapter.writeBytes);
@@ -247,15 +331,51 @@ class LogicalPartition extends MetaSystem {
fieldsMap.put("runningSpeed", adapter.runningSpeed);
fieldsMap.put("transmittedBytes", adapter.transmittedBytes);
fieldsMap.put("transferredByte", adapter.transmittedBytes); // TODO: Must be error in dashboard, remove when checked.
log.trace("getVirtualFibreChannelAdapterMetrics() - fields: " + fieldsMap);

list.add(new Measurement(tagsMap, fieldsMap));
});
} catch (Exception e) {
log.warn("getVirtualFibreChannelAdapterMetrics() - error: {}", e.getMessage());
}

return list;
}
// LPAR Network - SR-IOV Logical Ports
List<Measurement> getSriovLogicalPorts() {
List<Measurement> list = new ArrayList<>();
try {
metric.getSample().lparsUtil.network.sriovLogicalPorts.forEach(port -> {
HashMap<String, String> tagsMap = new HashMap<>();
HashMap<String, Object> fieldsMap = new HashMap<>();
tagsMap.put("servername", managedSystem.entry.getName());
tagsMap.put("lparname", entry.getName());
tagsMap.put("location", port.physicalLocation);
tagsMap.put("type", port.configurationType);
log.trace("getSriovLogicalPorts() - tags: " + tagsMap);
fieldsMap.put("sentBytes", port.sentBytes);
fieldsMap.put("receivedBytes", port.receivedBytes);
fieldsMap.put("transferredBytes", port.transferredBytes);
fieldsMap.put("sentPackets", port.sentPackets);
fieldsMap.put("receivedPackets", port.receivedPackets);
fieldsMap.put("droppedPackets", port.droppedPackets);
fieldsMap.put("errorIn", port.errorIn);
fieldsMap.put("errorOut", port.errorOut);
log.trace("getSriovLogicalPorts() - fields: " + fieldsMap);
list.add(new Measurement(tagsMap, fieldsMap));
});
} catch (Exception e) {
log.warn("getSriovLogicalPorts() - error: {}", e.getMessage());
}
return list;
}
}
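Several methods above collect fields into a `TreeMap` rather than a `HashMap`, so keys serialize in sorted order (the same idea as the 1.2.8 "sort measurement tags before writing to InfluxDB" change). A minimal stand-alone sketch of the effect, with made-up tag values:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class SortedTags {
    public static void main(String[] args) {
        // Insertion order of a HashMap is effectively arbitrary ...
        Map<String, String> tags = new HashMap<>();
        tags.put("vlanId", "1");
        tags.put("servername", "p750");
        tags.put("lparname", "aix01");

        // ... copying into a TreeMap yields deterministic, alphabetical
        // key order, which keeps series keys stable between writes.
        Map<String, String> sorted = new TreeMap<>(tags);
        System.out.println(sorted); // {lparname=aix01, servername=p750, vlanId=1}
    }
}
```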

File diff suppressed because it is too large

@@ -0,0 +1,264 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.hmci;
import biz.nellemann.hmci.dto.toml.HmcConfiguration;
import biz.nellemann.hmci.dto.xml.Link;
import biz.nellemann.hmci.dto.xml.ManagementConsoleEntry;
import biz.nellemann.hmci.dto.xml.XmlFeed;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.File;
import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.atomic.AtomicBoolean;
import static java.lang.Thread.sleep;
class ManagementConsole implements Runnable {
private final static Logger log = LoggerFactory.getLogger(ManagementConsole.class);
private final Integer refreshValue;
private final Integer discoverValue;
private final List<ManagedSystem> managedSystems = new ArrayList<>();
private final RestClient restClient;
private final InfluxClient influxClient;
private final AtomicBoolean keepRunning = new AtomicBoolean(true);
protected Integer responseErrors = 0;
private Boolean doEnergy = true;
private final List<String> excludeSystems;
private final List<String> includeSystems;
private final List<String> excludePartitions;
private final List<String> includePartitions;
ManagementConsole(HmcConfiguration configuration, InfluxClient influxClient) {
this.refreshValue = configuration.refresh;
this.discoverValue = configuration.discover;
this.doEnergy = configuration.energy;
this.influxClient = influxClient;
restClient = new RestClient(configuration.url, configuration.username, configuration.password, configuration.trust);
if(configuration.trace != null) {
try {
File traceDir = new File(configuration.trace);
traceDir.mkdirs();
if(traceDir.canWrite()) {
Boolean doTrace = true;
} else {
log.warn("ManagementConsole() - can't write to trace dir: " + traceDir.toString());
}
} catch (Exception e) {
log.error("ManagementConsole() - trace error: " + e.getMessage());
}
}
this.excludeSystems = configuration.excludeSystems;
this.includeSystems = configuration.includeSystems;
this.excludePartitions = configuration.excludePartitions;
this.includePartitions = configuration.includePartitions;
}
@Override
public void run() {
log.trace("run()");
Instant lastDiscover = Instant.now();
restClient.login();
discover();
do {
Instant instantStart = Instant.now();
try {
refresh();
if(instantStart.isAfter(lastDiscover.plus(discoverValue, ChronoUnit.MINUTES))) {
lastDiscover = instantStart;
discover();
}
} catch (Exception e) {
log.error("run() - fatal error: {}", e.getMessage());
keepRunning.set(false);
throw new RuntimeException(e);
}
Instant instantEnd = Instant.now();
long timeSpend = Duration.between(instantStart, instantEnd).toMillis();
log.trace("run() - duration millis: " + timeSpend);
if(timeSpend < (refreshValue * 1000)) {
try {
long sleepTime = (refreshValue * 1000) - timeSpend;
log.trace("run() - sleeping millis: " + sleepTime);
if(sleepTime > 0) {
//noinspection BusyWait
sleep(sleepTime);
}
} catch (InterruptedException e) {
log.error("run() - sleep interrupted", e);
}
} else {
log.warn("run() - possible slow response from this HMC");
}
} while (keepRunning.get());
// Logout of HMC
restClient.logoff();
}
public void discover() {
try {
String xml = restClient.getRequest("/rest/api/uom/ManagementConsole");
// Do not try to parse empty response
if(xml == null || xml.length() <= 1) {
responseErrors++;
log.warn("discover() - no data.");
return;
}
XmlMapper xmlMapper = new XmlMapper();
XmlFeed xmlFeed = xmlMapper.readValue(xml, XmlFeed.class);
ManagementConsoleEntry entry;
if(xmlFeed.getEntry() == null){
log.warn("discover() - xmlFeed.entry == null");
return;
}
if(xmlFeed.getEntry().getContent().isManagementConsole()) {
entry = xmlFeed.getEntry().getContent().getManagementConsole();
//log.info("discover() - {}", entry.getName());
} else {
throw new UnsupportedOperationException("Failed to deserialize ManagementConsole");
}
managedSystems.clear();
for (Link link : entry.getAssociatedManagedSystems()) {
ManagedSystem managedSystem = new ManagedSystem(restClient, link.getHref());
managedSystem.setExcludePartitions(excludePartitions);
managedSystem.setIncludePartitions(includePartitions);
managedSystem.discover();
// Only continue for powered-on operating systems
if(managedSystem.entry != null && Objects.equals(managedSystem.entry.state, "operating")) {
if(doEnergy) {
managedSystem.getPcmPreferences();
managedSystem.setDoEnergy(doEnergy);
}
// Check exclude / include
if (!excludeSystems.contains(managedSystem.name) && includeSystems.isEmpty()) {
managedSystems.add(managedSystem);
//log.info("discover() - adding !excluded system: {}", managedSystem.name);
} else if (!includeSystems.isEmpty() && includeSystems.contains(managedSystem.name)) {
managedSystems.add(managedSystem);
//log.info("discover() - adding included system: {}", managedSystem.name);
}
}
}
} catch (Exception e) {
log.warn("discover() - error: {}", e.getMessage());
}
}
void refresh() {
log.debug("refresh()");
managedSystems.forEach( (system) -> {
if(system.entry == null){
log.warn("refresh() - no data.");
return;
}
system.refresh();
influxClient.write(system.getDetails(), system.getTimestamp(),"server_details");
influxClient.write(system.getMemoryMetrics(), system.getTimestamp(),"server_memory");
influxClient.write(system.getProcessorMetrics(), system.getTimestamp(),"server_processor");
influxClient.write(system.getPhysicalProcessorPool(), system.getTimestamp(),"server_physicalProcessorPool");
influxClient.write(system.getSharedProcessorPools(), system.getTimestamp(),"server_sharedProcessorPool");
if(system.systemEnergy != null) {
system.systemEnergy.refresh();
if(system.systemEnergy.metric != null) {
influxClient.write(system.systemEnergy.getPowerMetrics(), system.getTimestamp(), "server_energy_power");
influxClient.write(system.systemEnergy.getThermalMetrics(), system.getTimestamp(), "server_energy_thermal");
}
}
influxClient.write(system.getVioDetails(), system.getTimestamp(),"vios_details");
influxClient.write(system.getVioProcessorMetrics(), system.getTimestamp(),"vios_processor");
influxClient.write(system.getVioMemoryMetrics(), system.getTimestamp(),"vios_memory");
influxClient.write(system.getVioNetworkLpars(), system.getTimestamp(),"vios_network_lpars");
influxClient.write(system.getVioNetworkVirtualAdapters(), system.getTimestamp(),"vios_network_virtual");
influxClient.write(system.getVioNetworkSharedAdapters(), system.getTimestamp(),"vios_network_shared");
influxClient.write(system.getVioNetworkGenericAdapters(), system.getTimestamp(),"vios_network_generic");
influxClient.write(system.getVioStorageLpars(), system.getTimestamp(),"vios_storage_lpars");
influxClient.write(system.getVioStorageFiberChannelAdapters(), system.getTimestamp(),"vios_storage_FC");
influxClient.write(system.getVioStorageVirtualAdapters(), system.getTimestamp(),"vios_storage_vFC");
influxClient.write(system.getVioStoragePhysicalAdapters(), system.getTimestamp(),"vios_storage_physical");
// Missing: vios_storage_SSP
system.logicalPartitions.forEach( (partition) -> {
partition.refresh();
influxClient.write(partition.getDetails(), partition.getTimestamp(),"lpar_details");
influxClient.write(partition.getMemoryMetrics(), partition.getTimestamp(),"lpar_memory");
influxClient.write(partition.getProcessorMetrics(), partition.getTimestamp(),"lpar_processor");
influxClient.write(partition.getSriovLogicalPorts(), partition.getTimestamp(),"lpar_net_sriov");
influxClient.write(partition.getVirtualEthernetAdapterMetrics(), partition.getTimestamp(),"lpar_net_virtual");
influxClient.write(partition.getVirtualGenericAdapterMetrics(), partition.getTimestamp(),"lpar_storage_virtual");
influxClient.write(partition.getVirtualFibreChannelAdapterMetrics(), partition.getTimestamp(),"lpar_storage_vFC");
});
});
}
/*
private void writeTraceFile(String id, String json) {
String fileName = String.format("%s-%s.json", id, Instant.now().toString());
try {
log.debug("Writing trace file: " + fileName);
File traceFile = new File(traceDir, fileName);
BufferedWriter writer = new BufferedWriter(new FileWriter(traceFile));
writer.write(json);
writer.close();
} catch (IOException e) {
log.warn("writeTraceFile() - " + e.getMessage());
}
}
*/
}
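The include/exclude rules in `discover()` above can be sketched as a standalone predicate (a minimal, JDK-only illustration; the class and method names are hypothetical and not part of the hmci codebase): an explicit include list wins, otherwise anything not excluded is accepted.

```java
import java.util.List;

public class SystemFilter {

    // Mirrors the discover() logic: when an include list is given, only
    // listed names pass; otherwise every name not on the exclude list passes.
    static boolean accept(String name, List<String> include, List<String> exclude) {
        if (!include.isEmpty()) {
            return include.contains(name);
        }
        return !exclude.contains(name);
    }

    public static void main(String[] args) {
        System.out.println(accept("P9-S922", List.of(), List.of()));          // true
        System.out.println(accept("P9-S922", List.of(), List.of("P9-S922"))); // false
        System.out.println(accept("P9-S922", List.of("E980"), List.of()));    // false
    }
}
```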


@ -1,133 +0,0 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.hmci;
import biz.nellemann.hmci.pcm.PcmData;
import com.serjltt.moshi.adapters.FirstElement;
import com.squareup.moshi.FromJson;
import com.squareup.moshi.JsonAdapter;
import com.squareup.moshi.Moshi;
import com.squareup.moshi.ToJson;
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.math.BigDecimal;
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
abstract class MetaSystem {
private final static Logger log = LoggerFactory.getLogger(MetaSystem.class);
private final JsonAdapter<PcmData> jsonAdapter;
protected PcmData metrics;
MetaSystem() {
try {
Moshi moshi = new Moshi.Builder().add(new NumberAdapter()).add(new BigDecimalAdapter()).add(FirstElement.ADAPTER_FACTORY).build();
jsonAdapter = moshi.adapter(PcmData.class);
} catch(Exception e) {
log.warn("MetaSystem() error", e);
throw new ExceptionInInitializerError(e);
}
}
void processMetrics(String json) {
try {
metrics = jsonAdapter.nullSafe().fromJson(json);
} catch(IOException e) {
log.warn("processMetrics() error", e);
}
//System.out.println(jsonAdapter.toJson(metrics));
}
Instant getTimestamp() {
String timestamp = getStringMetricObject(metrics.systemUtil.sample.sampleInfo.timeStamp);
Instant instant = Instant.now();
try {
log.trace("getTimeStamp() - PMC Timestamp: {}", timestamp);
DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss[XXX][X]");
instant = Instant.from(dateTimeFormatter.parse(timestamp));
log.trace("getTimestamp() - Instant: {}", instant.toString());
} catch(DateTimeParseException e) {
log.warn("getTimestamp() - parse error: {}", timestamp);
}
return instant;
}
String getStringMetricObject(Object obj) {
String metric = null;
try {
metric = (String) obj;
} catch (NullPointerException npe) {
log.warn("getStringMetricObject()", npe);
}
return metric;
}
Number getNumberMetricObject(Object obj) {
Number metric = null;
try {
metric = (Number) obj;
} catch (NullPointerException npe) {
log.warn("getNumberMetricObject()", npe);
}
return metric;
}
static class BigDecimalAdapter {
@FromJson
BigDecimal fromJson(String string) {
return new BigDecimal(string);
}
@ToJson
String toJson(BigDecimal value) {
return value.toString();
}
}
static class NumberAdapter {
@FromJson
Number fromJson(String string) {
return Double.parseDouble(string);
}
@ToJson
String toJson(Number value) {
return value.toString();
}
}
}


@ -0,0 +1,64 @@
package biz.nellemann.hmci;
import biz.nellemann.hmci.dto.json.ProcessedMetrics;
import biz.nellemann.hmci.dto.json.SystemUtil;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
public class Resource {
private final static Logger log = LoggerFactory.getLogger(Resource.class);
private final ObjectMapper objectMapper = new ObjectMapper();
protected SystemUtil metric;
Resource() {
objectMapper.enable(DeserializationFeature.UNWRAP_SINGLE_VALUE_ARRAYS);
objectMapper.enable(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY);
objectMapper.enable(DeserializationFeature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT);
}
void deserialize(String json) {
if(json == null || json.length() < 1) {
return;
}
try {
ProcessedMetrics processedMetrics = objectMapper.readValue(json, ProcessedMetrics.class);
metric = processedMetrics.systemUtil;
} catch (Exception e) {
log.error("deserialize() - error: {}", e.getMessage());
}
}
Instant getTimestamp() {
Instant instant = Instant.now();
if (metric == null) {
return instant;
}
String timestamp = metric.getSample().sampleInfo.timestamp;
try {
log.trace("getTimeStamp() - PMC Timestamp: {}", timestamp);
DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss[XXX][X]");
instant = Instant.from(dateTimeFormatter.parse(timestamp));
log.trace("getTimestamp() - Instant: {}", instant.toString());
} catch(DateTimeParseException e) {
log.warn("getTimestamp() - parse error: {}", timestamp);
}
return instant;
}
}
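The timestamp pattern used in `getTimestamp()` accepts HMC samples with either a `Z` or a numeric UTC offset, thanks to the optional `[XXX][X]` sections. A small JDK-only demonstration (sample timestamps are made up):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class TimestampDemo {
    public static void main(String[] args) {
        // Same pattern as Resource.getTimestamp(): optional offset forms.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss[XXX][X]");

        Instant zulu = Instant.from(fmt.parse("2022-12-01T14:30:00Z"));
        Instant offset = Instant.from(fmt.parse("2022-12-01T14:30:00+01:00"));

        System.out.println(zulu);   // 2022-12-01T14:30:00Z
        System.out.println(offset); // 2022-12-01T13:30:00Z (normalized to UTC)
    }
}
```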


@ -0,0 +1,303 @@
package biz.nellemann.hmci;
import biz.nellemann.hmci.dto.xml.LogonResponse;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import okhttp3.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.*;
import java.net.*;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
public class RestClient {
private final static Logger log = LoggerFactory.getLogger(RestClient.class);
private final MediaType MEDIA_TYPE_IBM_XML_LOGIN = MediaType.parse("application/vnd.ibm.powervm.web+xml; type=LogonRequest");
private final MediaType MEDIA_TYPE_IBM_XML_POST = MediaType.parse("application/xml, application/vnd.ibm.powervm.pcm.dita");
protected OkHttpClient httpClient;
// OkHttpClient timeouts
private final static int CONNECT_TIMEOUT = 30;
private final static int WRITE_TIMEOUT = 30;
private final static int READ_TIMEOUT = 180;
protected String authToken;
protected final String baseUrl;
protected final String username;
protected final String password;
public RestClient(String baseUrl, String username, String password, Boolean trustAll) {
this.baseUrl = baseUrl;
this.username = username;
this.password = password;
if (trustAll) {
this.httpClient = getUnsafeOkHttpClient();
} else {
this.httpClient = getSafeOkHttpClient();
}
}
/**
* Logon to the HMC and get an authentication token for further requests.
*/
public synchronized void login() {
log.info("Connecting to HMC - {} @ {}", username, baseUrl);
StringBuilder payload = new StringBuilder();
payload.append("<?xml version='1.0' encoding='UTF-8' standalone='yes'?>");
payload.append("<LogonRequest xmlns='http://www.ibm.com/xmlns/systems/power/firmware/web/mc/2012_10/' schemaVersion='V1_0'>");
payload.append("<UserID>").append(username).append("</UserID>");
payload.append("<Password>").append(password).append("</Password>");
payload.append("</LogonRequest>");
try {
//httpClient.start();
URL url = new URL(String.format("%s/rest/api/web/Logon", baseUrl));
Request request = new Request.Builder()
.url(url)
.addHeader("Accept", "application/vnd.ibm.powervm.web+xml; type=LogonResponse")
.addHeader("X-Audit-Memento", "IBM Power HMC Insights")
.put(RequestBody.create(payload.toString(), MEDIA_TYPE_IBM_XML_LOGIN))
.build();
String responseBody;
try (Response response = httpClient.newCall(request).execute()) {
responseBody = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
log.warn("login() - Unexpected response: {}", response.code());
throw new IOException("Unexpected code: " + response);
}
}
XmlMapper xmlMapper = new XmlMapper();
LogonResponse logonResponse = xmlMapper.readValue(responseBody, LogonResponse.class);
authToken = logonResponse.getToken();
log.debug("logon() - auth token: {}", authToken);
} catch (Exception e) {
log.warn("logon() - error: {}", e.getMessage());
}
}
/**
* Logoff from the HMC and remove any session
*
*/
synchronized void logoff() {
if(authToken == null) {
return;
}
try {
URL url = new URL(String.format("%s/rest/api/web/Logon", baseUrl));
Request request = new Request.Builder()
.url(url)
.addHeader("Content-Type", "application/vnd.ibm.powervm.web+xml; type=LogonRequest")
.addHeader("X-API-Session", authToken)
.delete()
.build();
String responseBody;
try (Response response = httpClient.newCall(request).execute()) {
responseBody = Objects.requireNonNull(response.body()).string();
} catch (IOException e) {
log.warn("logoff() error: {}", e.getMessage());
} finally {
authToken = null;
}
} catch (MalformedURLException e) {
log.warn("logoff() - error: {}", e.getMessage());
}
}
public String getRequest(String urlPath) throws IOException {
URL absUrl = new URL(String.format("%s%s", baseUrl, urlPath));
return getRequest(absUrl);
}
public String postRequest(String urlPath, String payload) throws IOException {
URL absUrl = new URL(String.format("%s%s", baseUrl, urlPath));
return postRequest(absUrl, payload);
}
/**
* Return a Response from the HMC
* @param url to get Response from
* @return Response body string
*/
public synchronized String getRequest(URL url) throws IOException {
log.trace("getRequest() - URL: {}", url.toString());
Request request = new Request.Builder()
.url(url)
.addHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
.addHeader("X-API-Session", (authToken == null ? "" : authToken))
.get().build();
String responseBody;
try (Response response = httpClient.newCall(request).execute()) {
responseBody = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
// Auth. failure
if(response.code() == 401) {
log.warn("getRequest() - 401 - login and retry.");
// Let's login again and retry
login();
return retryGetRequest(url);
}
log.error("getRequest() - Unexpected response: {}", response.code());
throw new IOException("getRequest() - Unexpected response: " + response.code());
}
}
return responseBody;
}
private String retryGetRequest(URL url) throws IOException {
log.debug("retryGetRequest() - URL: {}", url.toString());
Request request = new Request.Builder()
.url(url)
.addHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
.addHeader("X-API-Session", (authToken == null ? "" : authToken))
.get().build();
String responseBody = null;
try (Response responseRetry = httpClient.newCall(request).execute()) {
if(responseRetry.isSuccessful()) {
responseBody = responseRetry.body().string();
}
}
return responseBody;
}
/**
* Send a POST request with a payload (can be null) to the HMC
* @param url to send the POST request to
* @param payload request body, or null for an empty body
* @return Response body string
* @throws IOException on transport errors or unexpected response codes
*/
public synchronized String postRequest(URL url, String payload) throws IOException {
log.debug("sendPostRequest() - URL: {}", url.toString());
RequestBody requestBody;
if(payload != null) {
requestBody = RequestBody.create(payload, MEDIA_TYPE_IBM_XML_POST);
} else {
requestBody = RequestBody.create("", null);
}
Request request = new Request.Builder()
.url(url)
.addHeader("content-type", "application/xml")
.addHeader("X-API-Session", (authToken == null ? "" : authToken) )
.post(requestBody).build();
String responseBody;
try (Response response = httpClient.newCall(request).execute()) {
responseBody = Objects.requireNonNull(response.body()).string();
if (!response.isSuccessful()) {
response.close();
//log.warn(responseBody);
log.error("sendPostRequest() - Unexpected response: {}", response.code());
throw new IOException("sendPostRequest() - Unexpected response: " + response.code());
}
}
return responseBody;
}
/**
* Provide an unsafe (ignoring SSL problems) OkHttpClient
*
* @return OkHttpClient ignoring SSL/TLS errors
*/
private static OkHttpClient getUnsafeOkHttpClient() {
try {
// Create a trust manager that does not validate certificate chains
final TrustManager[] trustAllCerts = new TrustManager[] {
new X509TrustManager() {
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType) { }
@Override
public void checkServerTrusted(X509Certificate[] chain, String authType) {
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[]{};
}
}
};
// Install the all-trusting trust manager
final SSLContext sslContext = SSLContext.getInstance("SSL");
sslContext.init(null, trustAllCerts, new SecureRandom());
// Create a ssl socket factory with our all-trusting manager
final SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.sslSocketFactory(sslSocketFactory, (X509TrustManager)trustAllCerts[0]);
builder.hostnameVerifier((hostname, session) -> true);
builder.connectTimeout(CONNECT_TIMEOUT, TimeUnit.SECONDS);
builder.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS);
builder.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS);
return builder.build();
} catch (KeyManagementException | NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
}
/**
* Get OkHttpClient with our preferred timeout values.
* @return OkHttpClient
*/
private static OkHttpClient getSafeOkHttpClient() {
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.connectTimeout(CONNECT_TIMEOUT, TimeUnit.SECONDS);
builder.writeTimeout(WRITE_TIMEOUT, TimeUnit.SECONDS);
builder.readTimeout(READ_TIMEOUT, TimeUnit.SECONDS);
return builder.build();
}
}
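The trust-all manager installed by `getUnsafeOkHttpClient()` can be built with the JDK alone; a minimal sketch follows (the class name is illustrative). As in hmci, this disables certificate validation entirely, so it should only be pointed at HMCs you control.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

public class TrustAllContext {

    // Builds an SSLContext whose trust manager accepts any certificate chain.
    static SSLContext build() throws Exception {
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                public void checkServerTrusted(X509Certificate[] chain, String authType) { }
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(build().getProtocol()); // TLS
    }
}
```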


@ -1,60 +1,90 @@
/*
* Copyright 2020 Mark Nellemann <mark.nellemann@gmail.com>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package biz.nellemann.hmci;

import biz.nellemann.hmci.dto.xml.Link;
import biz.nellemann.hmci.dto.xml.XmlFeed;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.net.URI;
import java.util.*;

class SystemEnergy extends Resource {

    private final static Logger log = LoggerFactory.getLogger(SystemEnergy.class);

    private final RestClient restClient;
    private final ManagedSystem managedSystem;

    protected String id;
    protected String name;

    public SystemEnergy(RestClient restClient, ManagedSystem managedSystem) {
        log.debug("SystemEnergy()");
        this.restClient = restClient;
        this.managedSystem = managedSystem;
    }

    public void refresh() {
        log.debug("refresh()");
        try {
            String xml = restClient.getRequest(String.format("/rest/api/pcm/ManagedSystem/%s/ProcessedMetrics?Type=Energy&NoOfSamples=1", managedSystem.id));

            // Do not try to parse empty response
            if(xml == null || xml.length() <= 1) {
                log.debug("refresh() - no data."); // We do not log as 'warn' as many systems do not have this enabled.
                return;
            }

            XmlMapper xmlMapper = new XmlMapper();
            XmlFeed xmlFeed = xmlMapper.readValue(xml, XmlFeed.class);

            xmlFeed.entries.forEach((entry) -> {
                if (entry.category.term.equals("ManagedSystem")) {
                    Link link = entry.link;
                    if (link.getType() != null && Objects.equals(link.getType(), "application/json")) {
                        try {
                            URI jsonUri = URI.create(link.getHref());
                            String json = restClient.getRequest(jsonUri.getPath());
                            deserialize(json);
                        } catch (IOException e) {
                            log.error("refresh() - error 1: {}", e.getMessage());
                        }
                    }
                }
            });
        } catch (IOException e) {
            log.error("refresh() - error: {} {}", e.getClass(), e.getMessage());
        }
    }

    List<Measurement> getPowerMetrics() {
        List<Measurement> list = new ArrayList<>();
        try {
            HashMap<String, String> tagsMap = new HashMap<>();
            Map<String, Object> fieldsMap = new HashMap<>();

            tagsMap.put("servername", managedSystem.name);
            log.trace("getPowerMetrics() - tags: {}", tagsMap);

            fieldsMap.put("powerReading", metric.getSample().energyUtil.powerUtil.powerReading);
            log.trace("getPowerMetrics() - fields: {}", fieldsMap);

            list.add(new Measurement(tagsMap, fieldsMap));
        } catch (Exception e) {
            log.warn("getPowerMetrics() - error: {}", e.getMessage());
        }
        return list;
    }

@ -62,28 +92,36 @@ class SystemEnergy extends Resource {

    List<Measurement> getThermalMetrics() {
        List<Measurement> list = new ArrayList<>();
        try {
            HashMap<String, String> tagsMap = new HashMap<>();
            Map<String, Object> fieldsMap = new HashMap<>();

            tagsMap.put("servername", managedSystem.name);
            log.trace("getThermalMetrics() - tags: {}", tagsMap);

            metric.getSample().energyUtil.thermalUtil.cpuTemperatures.forEach((t) -> {
                fieldsMap.put("cpuTemperature_" + t.entityInstance, t.temperatureReading);
            });

            metric.getSample().energyUtil.thermalUtil.inletTemperatures.forEach((t) -> {
                fieldsMap.put("inletTemperature_" + t.entityInstance, t.temperatureReading);
            });

            /* Disabled, not sure if useful
            for(Temperature t : metrics.systemUtil.sample.energyUtil.thermalUtil.baseboardTemperatures) {
                fieldsMap.put("baseboardTemperature_" + t.entityInstance, t.temperatureReading);
            }*/

            log.trace("getThermalMetrics() - fields: {}", fieldsMap);
            list.add(new Measurement(tagsMap, fieldsMap));
        } catch (Exception e) {
            log.warn("getThermalMetrics() - error: {}", e.getMessage());
        }
        return list;
    }
}


@ -0,0 +1,66 @@
package biz.nellemann.hmci;
import biz.nellemann.hmci.dto.xml.VirtualIOServerEntry;
import biz.nellemann.hmci.dto.xml.XmlEntry;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.URI;
import java.net.URISyntaxException;
public class VirtualIOServer {
private final static Logger log = LoggerFactory.getLogger(VirtualIOServer.class);
private final RestClient restClient;
private final ManagedSystem managedSystem;
protected String id;
private String uriPath;
protected VirtualIOServerEntry entry;
public VirtualIOServer(RestClient restClient, String href, ManagedSystem system) {
log.debug("VirtualIOServer() - {}", href);
this.restClient = restClient;
this.managedSystem = system;
try {
URI uri = new URI(href);
uriPath = uri.getPath();
//refresh();
} catch (URISyntaxException e) {
log.error("VirtualIOServer() - {}", e.getMessage());
}
}
public void discover() {
try {
String xml = restClient.getRequest(uriPath);
// Do not try to parse empty response
if(xml == null || xml.length() <= 1) {
log.warn("discover() - no data.");
return;
}
XmlMapper xmlMapper = new XmlMapper();
XmlEntry xmlEntry = xmlMapper.readValue(xml, XmlEntry.class);
if(xmlEntry.getContent() == null){
log.warn("discover() - no content.");
return;
}
if(xmlEntry.getContent().isVirtualIOServer()) {
entry = xmlEntry.getContent().getVirtualIOServerEntry();
log.debug("discover() - {}", entry.getName());
} else {
throw new UnsupportedOperationException("Failed to deserialize VirtualIOServer");
}
} catch (Exception e) {
log.error("discover() - error: {}", e.getMessage());
}
}
}


@ -0,0 +1,8 @@
package biz.nellemann.hmci.dto.json;
public final class EnergyUtil {
public PowerUtil powerUtil = new PowerUtil();
public ThermalUtil thermalUtil = new ThermalUtil();
}


@ -0,0 +1,22 @@
package biz.nellemann.hmci.dto.json;
/**
* Storage adapter
*/
public final class FiberChannelAdapter {
public String id;
public String wwpn;
public String physicalLocation;
public int numOfPorts;
public double numOfReads;
public double numOfWrites;
public double readBytes;
public double writeBytes;
public double runningSpeed;
public double transmittedBytes;
}


@ -0,0 +1,16 @@
package biz.nellemann.hmci.dto.json;
public final class GenericAdapter {
public String id;
public String type = "";
public String physicalLocation = "";
public double receivedPackets = 0.0;
public double sentPackets = 0.0;
public double droppedPackets = 0.0;
public double sentBytes = 0.0;
public double receivedBytes = 0.0;
public double transferredBytes = 0.0;
}


@ -0,0 +1,15 @@
package biz.nellemann.hmci.dto.json;
public final class GenericPhysicalAdapters {
public String id;
public String type;
public String physicalLocation;
public double numOfReads;
public double numOfWrites;
public double readBytes;
public double writeBytes;
public double transmittedBytes;
}


@ -0,0 +1,22 @@
package biz.nellemann.hmci.dto.json;
import com.fasterxml.jackson.annotation.JsonIgnore;
/**
* Storage adapter
*/
public final class GenericVirtualAdapter {
public String id = "";
public String type = "";
public Integer viosId = 0;
public String physicalLocation = "";
public Double numOfReads = 0.0;
public Double numOfWrites = 0.0;
public Double readBytes = 0.0;
public Double writeBytes = 0.0;
public Double transmittedBytes = 0.0;
}


@ -0,0 +1,12 @@
package biz.nellemann.hmci.dto.json;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
@JsonIgnoreProperties(ignoreUnknown = true)
public final class LparMemory {
public Double logicalMem;
public Double utilizedMem = 0.0;
public Double backedPhysicalMem = 0.0;
}


@ -0,0 +1,21 @@
package biz.nellemann.hmci.dto.json;
public final class LparProcessor {
public Integer poolId = 0;
public Integer weight = 0;
public String mode = "";
public Double maxVirtualProcessors = 0.0;
public Double currentVirtualProcessors = 0.0;
public Double maxProcUnits = 0.0;
public Double entitledProcUnits = 0.0;
public Double utilizedProcUnits = 0.0;
public Double utilizedCappedProcUnits = 0.0;
public Double utilizedUncappedProcUnits = 0.0;
public Double idleProcUnits = 0.0;
public Double donatedProcUnits = 0.0;
public Double timeSpentWaitingForDispatch = 0.0;
public Double timePerInstructionExecution = 0.0;
}


@ -1,5 +1,8 @@
package biz.nellemann.hmci.dto.json;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)
public final class LparUtil {

    public Integer id = 0;

@ -8,7 +11,7 @@ public final class LparUtil {

    public String state = "";
    public String type = "";
    public String osType = "";
    public Float affinityScore = 0.0f;

    public final LparMemory memory = new LparMemory();
    public final LparProcessor processor = new LparProcessor();


@ -0,0 +1,16 @@
package biz.nellemann.hmci.dto.json;
import java.util.ArrayList;
import java.util.List;
public final class Network {
public List<String> clientLpars = new ArrayList<>();
public List<GenericAdapter> genericAdapters = new ArrayList<>();
public List<SharedAdapter> sharedAdapters = new ArrayList<>();
public List<VirtualEthernetAdapter> virtualEthernetAdapters = new ArrayList<>();
public List<SRIOVAdapter> sriovAdapters = new ArrayList<>();
public List<SRIOVLogicalPort> sriovLogicalPorts = new ArrayList<>();
}


@ -0,0 +1,12 @@
package biz.nellemann.hmci.dto.json;
public final class PhysicalProcessorPool {
public double assignedProcUnits = 0.0;
public double utilizedProcUnits = 0.0;
public double availableProcUnits = 0.0;
public double configuredProcUnits = 0.0;
public double borrowedProcUnits = 0.0;
}


@ -0,0 +1,7 @@
package biz.nellemann.hmci.dto.json;
public final class PowerUtil {
public Number powerReading = 0.0;
}


@ -0,0 +1,7 @@
package biz.nellemann.hmci.dto.json;
public class ProcessedMetrics {
public SystemUtil systemUtil;
}


@ -0,0 +1,11 @@
package biz.nellemann.hmci.dto.json;
import java.util.List;
public final class SRIOVAdapter {
public String drcIndex = "";
public List<SRIOVPhysicalPort> physicalPorts;
}


@ -0,0 +1,21 @@
package biz.nellemann.hmci.dto.json;
public class SRIOVLogicalPort {
public String drcIndex;
public String physicalLocation;
public String physicalDrcIndex;
public Number physicalPortId;
public String clientPartitionUUID;
public String vnicDeviceMode;
public String configurationType;
public Number receivedPackets;
public Number sentPackets;
public Number droppedPackets;
public Number sentBytes;
public Number receivedBytes;
public Number errorIn;
public Number errorOut;
public Number transferredBytes;
}


@ -1,44 +1,20 @@
-package biz.nellemann.hmci.pcm;
+package biz.nellemann.hmci.dto.json;
-
-import com.serjltt.moshi.adapters.FirstElement;
-
-public class SriovLogicalPort {
+public final class SRIOVPhysicalPort {
-    public String drcIndex = "";
+    public String id;
     public String physicalLocation = "";   // "U78CA.001.CSS0CXA-P1-C2-C1-T1-S2"
     public String physicalDrcIndex = "";
     public Number physicalPortId = 0;
     public String vnicDeviceMode = "";     // "NonVNIC"
     public String configurationType = "";  // "Ethernet"
-    @FirstElement
     public Number receivedPackets = 0.0;
-    @FirstElement
     public Number sentPackets = 0.0;
-    @FirstElement
     public Number droppedPackets = 0.0;
-    @FirstElement
     public Number sentBytes = 0.0;
-    @FirstElement
     public Number receivedBytes = 0.0;
-    @FirstElement
     public Number errorIn = 0.0;
-    @FirstElement
     public Number errorOut = 0.0;
-    @FirstElement
     public Number transferredBytes = 0.0;
 }


@ -0,0 +1,29 @@
package biz.nellemann.hmci.dto.json;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.util.List;
public final class SampleInfo {
@JsonProperty("timeStamp")
public String timestamp;
public String getTimeStamp() {
return timestamp;
}
public Integer status;
@JsonProperty("errorInfo")
public List<ErrorInfo> errors;
static class ErrorInfo {
public String errId;
public String errMsg;
public String uuid;
public String reportedBy;
public Integer occurrenceCount;
}
}


@ -0,0 +1,11 @@
package biz.nellemann.hmci.dto.json;
public final class ServerMemory {
public double totalMem = 0.0;
public double availableMem = 0.0;
public double configurableMem = 0.0;
public double assignedMemToLpars = 0.0;
public double virtualPersistentMem = 0.0;
}


@ -0,0 +1,10 @@
package biz.nellemann.hmci.dto.json;
public final class ServerProcessor {
public Double totalProcUnits = 0.0;
public Double utilizedProcUnits = 0.0;
public Double availableProcUnits = 0.0;
public Double configurableProcUnits = 0.0;
}


@ -0,0 +1,15 @@
package biz.nellemann.hmci.dto.json;
import java.util.ArrayList;
import java.util.List;
public final class ServerUtil {
public final ServerProcessor processor = new ServerProcessor();
public final ServerMemory memory = new ServerMemory();
public PhysicalProcessorPool physicalProcessorPool = new PhysicalProcessorPool();
public List<SharedProcessorPool> sharedProcessorPool = new ArrayList<>();
public Network network = new Network();
}


@ -0,0 +1,25 @@
package biz.nellemann.hmci.dto.json;
import java.util.List;
/**
* Network adapter
*/
public final class SharedAdapter {
public String id;
public String type;
public String physicalLocation;
public double receivedPackets;
public double sentPackets;
public double droppedPackets;
public double sentBytes;
public double receivedBytes;
public double transferredBytes;
public List<String> bridgedAdapters;
}


@ -0,0 +1,15 @@
package biz.nellemann.hmci.dto.json;
public final class SharedProcessorPool {
public int id;
public String name;
public double assignedProcUnits = 0.0;
public double utilizedProcUnits = 0.0;
public double availableProcUnits = 0.0;
public double configuredProcUnits = 0.0;
public double borrowedProcUnits = 0.0;
}


@ -0,0 +1,15 @@
package biz.nellemann.hmci.dto.json;
import java.util.ArrayList;
import java.util.List;
public final class Storage {
public List<String> clientLpars = new ArrayList<>();
public List<GenericPhysicalAdapters> genericPhysicalAdapters = new ArrayList<>();
public List<GenericVirtualAdapter> genericVirtualAdapters = new ArrayList<>();
public List<FiberChannelAdapter> fiberChannelAdapters = new ArrayList<>();
public List<VirtualFiberChannelAdapter> virtualFiberChannelAdapters = new ArrayList<>();
}


@ -0,0 +1,14 @@
package biz.nellemann.hmci.dto.json;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonUnwrapped;
public final class SystemFirmware {
@JsonUnwrapped
public Double utilizedProcUnits;// = 0.0;
public Double assignedMem = 0.0;
}


@ -0,0 +1,24 @@
package biz.nellemann.hmci.dto.json;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonUnwrapped;
import java.util.List;
public final class SystemUtil {
@JsonProperty("utilInfo")
public UtilInfo utilInfo;
public UtilInfo getUtilInfo() {
return utilInfo;
}
@JsonUnwrapped
@JsonProperty("utilSamples")
public List<UtilSample> samples;
public UtilSample getSample() {
return samples.size() > 0 ? samples.get(0) : new UtilSample();
}
}


@ -1,13 +1,9 @@
-package biz.nellemann.hmci.pcm;
+package biz.nellemann.hmci.dto.json;
-
-import com.serjltt.moshi.adapters.FirstElement;
 public final class Temperature {
     public String entityId = "";
     public String entityInstance = "";
-    @FirstElement
     public Number temperatureReading = 0.0;
 }


@ -0,0 +1,12 @@
package biz.nellemann.hmci.dto.json;
import java.util.ArrayList;
import java.util.List;
public final class ThermalUtil {
public List<Temperature> inletTemperatures = new ArrayList<>();
public List<Temperature> cpuTemperatures = new ArrayList<>();
public List<Temperature> baseboardTemperatures = new ArrayList<>();
}


@ -1,7 +1,8 @@
-package biz.nellemann.hmci.pcm;
+package biz.nellemann.hmci.dto.json;
-
-import com.serjltt.moshi.adapters.FirstElement;
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+
+@JsonIgnoreProperties({ "metricArrayOrder" })
 public final class UtilInfo {
     public String version = "";
@ -13,7 +14,4 @@ public final class UtilInfo {
     public String name = "";
     public String uuid = "";
-    @FirstElement
-    public String metricArrayOrder = "";
 }


@ -0,0 +1,30 @@
package biz.nellemann.hmci.dto.json;
import com.fasterxml.jackson.annotation.JsonAlias;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.util.ArrayList;
import java.util.List;
public final class UtilSample {
public String sampleType = "";
@JsonProperty("sampleInfo")
public SampleInfo sampleInfo = new SampleInfo();
public SampleInfo getInfo() {
return sampleInfo;
}
@JsonProperty("systemFirmwareUtil")
public SystemFirmware systemFirmwareUtil = new SystemFirmware();
public ServerUtil serverUtil = new ServerUtil();
public EnergyUtil energyUtil = new EnergyUtil();
public List<ViosUtil> viosUtil = new ArrayList<>();
public LparUtil lparsUtil = new LparUtil();
}


@ -0,0 +1,7 @@
package biz.nellemann.hmci.dto.json;
public final class ViosMemory {
public double assignedMem;
public double utilizedMem;
public double virtualPersistentMem;
}


@ -1,9 +1,9 @@
package biz.nellemann.hmci.pcm; package biz.nellemann.hmci.dto.json;
public final class ViosUtil { public final class ViosUtil {
public String id = ""; public int id;
public String uuid = ""; public String uuid;
public String name = ""; public String name = "";
public String state = ""; public String state = "";
public Integer affinityScore = 0; public Integer affinityScore = 0;


@ -0,0 +1,30 @@
package biz.nellemann.hmci.dto.json;
/**
* Network adapter SEA
*/
public final class VirtualEthernetAdapter {
public String physicalLocation = "";
public Integer vlanId = 0;
public Integer vswitchId = 0;
public Boolean isPortVlanId = false;
public Integer viosId = 0;
public String sharedEthernetAdapterId = "";
public Double receivedPackets = 0.0;
public Double sentPackets = 0.0;
public Double droppedPackets = 0.0;
public Double sentBytes = 0.0;
public Double receivedBytes = 0.0;
public Double receivedPhysicalPackets = 0.0;
public Double sentPhysicalPackets = 0.0;
public Double droppedPhysicalPackets = 0.0;
public Double sentPhysicalBytes = 0.0;
public Double receivedPhysicalBytes = 0.0;
public Double transferredBytes = 0.0;
public Double transferredPhysicalBytes = 0.0;
}


@ -0,0 +1,23 @@
package biz.nellemann.hmci.dto.json;
/**
* Storage adapter - NPIV ?
*/
public final class VirtualFiberChannelAdapter {
public String wwpn = "";
public String wwpn2 = "";
public String physicalLocation = "";
public String physicalPortWWPN = "";
public Integer viosId = 0;
public Double numOfReads = 0.0;
public Double numOfWrites = 0.0;
public Double readBytes = 0.0;
public Double writeBytes = 0.0;
public Double runningSpeed = 0.0;
public Double transmittedBytes = 0.0;
}


@ -0,0 +1,12 @@
package biz.nellemann.hmci.dto.toml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import java.util.Map;
@JsonIgnoreProperties(ignoreUnknown = true)
public class Configuration {
public InfluxConfiguration influx;
public Map<String, HmcConfiguration> hmc;
}


@ -0,0 +1,28 @@
package biz.nellemann.hmci.dto.toml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import java.util.ArrayList;
import java.util.List;
@JsonIgnoreProperties(ignoreUnknown = true)
public class HmcConfiguration {
public String url;
public String name;
public String username;
public String password;
public Integer refresh = 30;
public Integer discover = 120;
public String trace;
public Boolean energy = true;
public Boolean trust = true;
public List<String> excludeSystems = new ArrayList<>();
public List<String> includeSystems = new ArrayList<>();
public List<String> excludePartitions = new ArrayList<>();
public List<String> includePartitions = new ArrayList<>();
}


@ -0,0 +1,17 @@
package biz.nellemann.hmci.dto.toml;
public class InfluxConfiguration {
public String url;
public String username;
public String password;
public String database;
/*public InfluxConfiguration(String url, String username, String password, String database) {
this.url = url;
this.username = username;
this.password = password;
this.database = database;
}*/
}
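The TOML DTOs above define the expected shape of the new configuration file. A minimal sketch, inferred only from the field names in `Configuration`, `InfluxConfiguration` and `HmcConfiguration` — hostnames, credentials and the `site1` key are hypothetical, and doc/hmci.toml in the repository remains the authoritative example:

```toml
[influx]
url = "http://localhost:8086"
username = "root"
password = ""
database = "hmci"

# One [hmc.<name>] table per HMC; the map key becomes the HmcConfiguration name.
[hmc.site1]
url = "https://10.0.0.10:12443"
username = "hmci"
password = "secret"
refresh = 30        # default from HmcConfiguration; unit assumed to be seconds
discover = 120      # default from HmcConfiguration; unit assumed
energy = true
trust = true        # assumed: accept the HMC's self-signed certificate
excludeSystems = []
includePartitions = []
```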


@ -0,0 +1,16 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
@JsonIgnoreProperties({ "Atom", "ksv", "kxe", "kb", "schemaVersion", "" })
public class IFixDetail implements Serializable {
private static final long serialVersionUID = 1L;
@JsonProperty("IFix")
public String iFix;
}


@ -0,0 +1,35 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import java.io.Serializable;
@JsonIgnoreProperties(ignoreUnknown = true)
public class Link implements Serializable {
private static final long serialVersionUID = 1L;
@JacksonXmlProperty(isAttribute = true)
public String rel;
public String getRel() {
return rel;
}
@JacksonXmlProperty(isAttribute = true)
public String type;
public String getType() {
return type;
}
@JacksonXmlProperty(isAttribute = true)
public String href;
public String getHref() {
return href;
}
}


@ -0,0 +1,55 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
/*
@JsonIgnoreProperties({
"ksv", "kxe", "kb", "schemaVersion", "Metadata", "AllowPerformanceDataCollection",
"AssociatedPartitionProfile", "AvailabilityPriority", "CurrentProcessorCompatibilityMode", "CurrentProfileSync",
"IsBootable", "IsConnectionMonitoringEnabled", "IsOperationInProgress", "IsRedundantErrorPathReportingEnabled",
"IsTimeReferencePartition", "IsVirtualServiceAttentionLEDOn", "IsVirtualTrustedPlatformModuleEnabled",
"KeylockPosition", "LogicalSerialNumber", "OperatingSystemVersion", "PartitionCapabilities", "PartitionID",
"PartitionIOConfiguration", "PartitionMemoryConfiguration", "PartitionProcessorConfiguration", "PartitionProfiles",
"PendingProcessorCompatibilityMode", "ProcessorPool", "ProgressPartitionDataRemaining", "ProgressPartitionDataTotal",
"ProgressState", "ResourceMonitoringControlState", "ResourceMonitoringIPAddress", "AssociatedManagedSystem",
"ClientNetworkAdapters", "HostEthernetAdapterLogicalPorts", "MACAddressPrefix", "IsServicePartition",
"PowerVMManagementCapable", "ReferenceCode", "AssignAllResources", "HardwareAcceleratorQoS", "LastActivatedProfile",
"HasPhysicalIO", "AllowPerformanceDataCollection", "PendingSecureBoot", "CurrentSecureBoot", "BootMode",
"PowerOnWithHypervisor", "Description", "MigrationStorageViosDataStatus", "MigrationStorageViosDataTimestamp",
"RemoteRestartCapable", "SimplifiedRemoteRestartCapable", "HasDedicatedProcessorsForMigration", "SuspendCapable",
"MigrationDisable", "MigrationState", "RemoteRestartState", "VirtualFibreChannelClientAdapters",
"VirtualSCSIClientAdapters", "BootListInformation"
})
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class LogicalPartitionEntry implements Serializable, ResourceEntry {
private static final long serialVersionUID = 1L;
@JsonProperty("PartitionID")
public Number partitionId;
@JsonProperty("PartitionName")
public String partitionName;
@JsonProperty("PartitionState")
public String partitionState;
@JsonProperty("PartitionType")
public String partitionType;
@JsonProperty("PartitionUUID")
public String partitionUUID;
@JsonProperty("OperatingSystemType")
public String operatingSystemType;
@Override
public String getName() {
return partitionName.trim();
}
}


@ -0,0 +1,21 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
@JsonIgnoreProperties({ "schemaVersion", "Metadata" })
public class LogonResponse implements Serializable {
private static final long serialVersionUID = 1L;
@JsonProperty("X-API-Session")
private String token;
public String getToken() {
return token;
}
}


@ -0,0 +1,46 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import java.io.Serializable;
@JsonIgnoreProperties({ "kb", "kxe", "Metadata" })
public class MachineTypeModelAndSerialNumber implements Serializable {
private static final long serialVersionUID = 1L;
@JacksonXmlProperty(isAttribute = true)
private final String schemaVersion = "V1_0";
@JsonProperty("MachineType")
public String machineType;
public String getMachineType() {
return machineType;
}
@JsonProperty("Model")
public String model;
public String getModel() {
return model;
}
@JsonProperty("SerialNumber")
public String serialNumber;
public String getSerialNumber() {
return serialNumber;
}
public String getTypeAndModel() {
return machineType+"-"+model;
}
public String getTypeAndModelAndSerialNumber() {
return machineType+"-"+model+"-"+serialNumber;
}
}


@ -0,0 +1,94 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonAlias;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
/*
@JsonIgnoreProperties({
"schemaVersion", "Metadata", "AssociatedIPLConfiguration", "AssociatedSystemCapabilities",
"AssociatedSystemIOConfiguration", "AssociatedSystemMemoryConfiguration", "AssociatedSystemProcessorConfiguration",
"AssociatedSystemSecurity", "DetailedState", "ManufacturingDefaultConfigurationEnabled", "MaximumPartitions",
"MaximumPowerControlPartitions", "MaximumRemoteRestartPartitions", "MaximumSharedProcessorCapablePartitionID",
"MaximumSuspendablePartitions", "MaximumBackingDevicesPerVNIC", "PhysicalSystemAttentionLEDState",
"PrimaryIPAddress", "ServiceProcessorFailoverEnabled", "ServiceProcessorFailoverReason", "ServiceProcessorFailoverState",
"ServiceProcessorVersion", "VirtualSystemAttentionLEDState", "SystemMigrationInformation", "ReferenceCode",
"MergedReferenceCode", "EnergyManagementConfiguration", "IsPowerVMManagementMaster", "IsClassicHMCManagement",
"IsPowerVMManagementWithoutMaster", "IsManagementPartitionPowerVMManagementMaster", "IsHMCPowerVMManagementMaster",
"IsNotPowerVMManagementMaster", "IsPowerVMManagementNormalMaster", "IsPowerVMManagementPersistentMaster",
"IsPowerVMManagementTemporaryMaster", "IsPowerVMManagementPartitionEnabled", "SupportedHardwareAcceleratorTypes",
"CurrentStealableProcUnits", "CurrentStealableMemory", "Description", "SystemLocation", "SystemType",
"ProcessorThrottling", "AssociatedPersistentMemoryConfiguration"
})*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class ManagedSystemEntry implements Serializable, ResourceEntry {
private static final long serialVersionUID = 1L;
@JsonProperty("State")
public String state;
@JsonProperty("Hostname")
public String hostname;
//@JsonAlias("ActivatedLevel")
@JsonProperty("ActivatedLevel")
public Integer activatedLevel;
public Integer getActivatedLevel() {
return activatedLevel;
}
@JsonAlias("ActivatedServicePackNameAndLevel")
public String activatedServicePackNameAndLevel;
public String getActivatedServicePackNameAndLevel() {
return activatedServicePackNameAndLevel;
}
@JsonAlias("SystemName")
public String systemName = "";
public String getSystemName() {
return systemName.trim();
}
@Override
public String getName() {
return systemName.trim();
}
@JsonProperty("SystemTime")
public Long systemTime;
@JsonProperty("SystemFirmware")
public String systemFirmware;
@JsonAlias("AssociatedLogicalPartitions")
public List<Link> associatedLogicalPartitions;
public List<Link> getAssociatedLogicalPartitions() {
return associatedLogicalPartitions != null ? associatedLogicalPartitions : new ArrayList<>();
}
@JsonAlias("AssociatedVirtualIOServers")
public List<Link> associatedVirtualIOServers;
public List<Link> getAssociatedVirtualIOServers() {
return associatedVirtualIOServers != null ? associatedVirtualIOServers : new ArrayList<>();
}
@JsonAlias("MachineTypeModelAndSerialNumber")
public MachineTypeModelAndSerialNumber machineTypeModelAndSerialNumber;
public MachineTypeModelAndSerialNumber getMachineTypeModelAndSerialNumber() {
return machineTypeModelAndSerialNumber;
}
}


@ -0,0 +1,57 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
@JsonIgnoreProperties(ignoreUnknown = true)
@JacksonXmlRootElement(localName = "ManagedSystemPcmPreference:ManagedSystemPcmPreference")
public class ManagedSystemPcmPreference {
@JacksonXmlProperty(isAttribute = true)
private final String schemaVersion = "V1_0";
@JacksonXmlProperty(isAttribute = true, localName = "xmlns")
private final String xmlns = "http://www.ibm.com/xmlns/systems/power/firmware/pcm/mc/2012_10/";
@JacksonXmlProperty(isAttribute = true, localName = "xmlns:ManagedSystemPcmPreference")
private final String ns1 = "http://www.ibm.com/xmlns/systems/power/firmware/pcm/mc/2012_10/";
@JacksonXmlProperty(isAttribute = true, localName = "xmlns:ns2")
private final String ns2 = "http://www.w3.org/XML/1998/namespace/k2";
@JsonProperty("Metadata")
public Metadata metadata;
@JsonProperty("SystemName")
public String systemName;
@JsonProperty("MachineTypeModelSerialNumber")
public MachineTypeModelAndSerialNumber machineTypeModelSerialNumber;
@JsonProperty("EnergyMonitoringCapable")
public Boolean energyMonitoringCapable = false;
@JsonProperty("LongTermMonitorEnabled")
public Boolean longTermMonitorEnabled;
@JsonProperty("AggregationEnabled")
public Boolean aggregationEnabled;
@JsonProperty("ShortTermMonitorEnabled")
public Boolean shortTermMonitorEnabled;
// ksv ksv="V1_1_0"
//@JacksonXmlProperty(isAttribute = true)
//@JsonProperty("ComputeLTMEnabled")
//public Boolean computeLTMEnabled;
@JsonProperty("EnergyMonitorEnabled")
public Boolean energyMonitorEnabled = false;
@JsonProperty("AssociatedManagedSystem")
public Link associatedManagedSystem;
}
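Serialized with Jackson's XML support, an instance of this class should produce roughly the following document. The root element, attributes and element names come from the annotations above; the element values are hypothetical, and fields left null are omitted here for brevity:

```xml
<ManagedSystemPcmPreference:ManagedSystemPcmPreference
    schemaVersion="V1_0"
    xmlns="http://www.ibm.com/xmlns/systems/power/firmware/pcm/mc/2012_10/"
    xmlns:ManagedSystemPcmPreference="http://www.ibm.com/xmlns/systems/power/firmware/pcm/mc/2012_10/"
    xmlns:ns2="http://www.w3.org/XML/1998/namespace/k2">
  <SystemName>Server-9009-42A-SN1234567</SystemName>
  <EnergyMonitoringCapable>true</EnergyMonitoringCapable>
  <LongTermMonitorEnabled>true</LongTermMonitorEnabled>
  <AggregationEnabled>true</AggregationEnabled>
  <ShortTermMonitorEnabled>false</ShortTermMonitorEnabled>
  <EnergyMonitorEnabled>true</EnergyMonitorEnabled>
</ManagedSystemPcmPreference:ManagedSystemPcmPreference>
```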


@ -0,0 +1,99 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
/*
@JsonIgnoreProperties({
"schemaVersion", "Metadata", "NetworkInterfaces", "Driver", "LicenseID", "LicenseFirstYear", "UVMID",
"TemplateObjectModelVersion", "UserObjectModelVersion", "WebObjectModelVersion", "PublicSSHKeyValue",
"MinimumKeyStoreSize", "MinimumKeyStoreSize"
})*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class ManagementConsoleEntry implements Serializable, ResourceEntry {
private static final long serialVersionUID = 1L;
@JsonProperty("MachineTypeModelAndSerialNumber")
private MachineTypeModelAndSerialNumber machineTypeModelAndSerialNumber;
public MachineTypeModelAndSerialNumber getMachineTypeModelAndSerialNumber() {
return machineTypeModelAndSerialNumber;
}
@JsonProperty("ManagedSystems")
protected List<Link> associatedManagedSystems;
public List<Link> getAssociatedManagedSystems() {
// TODO: Security - Return new array, so receiver cannot modify ours.
return new ArrayList<>(associatedManagedSystems);
}
@JsonProperty("ManagementConsoleName")
public String managementConsoleName;
@Override
public String getName() {
return managementConsoleName.replace("\n", "").trim();
}
@JsonProperty("VersionInfo")
public VersionInfo versionInfo;
@JsonProperty("BIOS")
protected String bios;
@JsonProperty("BaseVersion")
protected String baseVersion;
public String getBaseVersion() {
return baseVersion;
}
@JsonProperty("IFixDetails")
public IFixDetails iFixDetails;
@JsonIgnoreProperties({ "ksv", "kxe", "kb", "schemaVersion", "Metadata" })
static class IFixDetails {
@JsonProperty("IFixDetail")
public List<IFixDetail> iFixDetailList;
}
@JsonProperty("ProcConfiguration")
public ProcConfiguration procConfiguration;
@JsonIgnoreProperties({ "ksv", "kxe", "kb", "schemaVersion", "Metadata", "Atom" })
static class ProcConfiguration {
@JsonProperty("NumberOfProcessors")
public Integer numberOfProcessors;
@JsonProperty("ModelName")
public String modelName;
@JsonProperty("Architecture")
public String architecture;
}
@JsonProperty("MemConfiguration")
public MemConfiguration memConfiguration;
@JsonIgnoreProperties({ "ksv", "kxe", "kb", "schemaVersion", "Metadata", "Atom" })
static class MemConfiguration {
@JsonProperty("TotalMemory")
public Integer totalMemory;
@JsonProperty("TotalSwapMemory")
public Integer totalSwapMemory;
}
}


@ -0,0 +1,21 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
@JsonIgnoreProperties(ignoreUnknown = true)
public class Metadata {
@JsonProperty("Atom")
public Atom atom;
@JsonIgnoreProperties(ignoreUnknown = true)
public class Atom {
@JsonProperty("AtomID")
public String atomID;
@JsonProperty("AtomCreated")
public String atomCreated;
}
}


@ -0,0 +1,6 @@
package biz.nellemann.hmci.dto.xml;
public interface ResourceEntry {
String getName();
}


@ -0,0 +1,32 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
@JsonIgnoreProperties({ "kxe", "kb", "schemaVersion", "Metadata" })
public class VersionInfo implements Serializable {
private static final long serialVersionUID = 1L;
@JsonProperty("BuildLevel")
public String buildLevel;
@JsonProperty("Maintenance")
protected String maintenance;
@JsonProperty("Minor")
protected String minor;
@JsonProperty("Release")
protected String release;
@JsonProperty("ServicePackName")
public String servicePackName;
@JsonProperty("Version")
protected String version;
}


@ -0,0 +1,26 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
@JsonIgnoreProperties(ignoreUnknown = true)
public class VirtualIOServerEntry implements Serializable, ResourceEntry {
private static final long serialVersionUID = 1L;
@JsonProperty("PartitionName")
private String partitionName;
public String getPartitionName() {
return partitionName;
}
@Override
public String getName() {
return partitionName;
}
}


@ -0,0 +1,110 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonAlias;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.io.Serializable;
import java.util.List;
//@JsonIgnoreProperties({ "author", "etag" })
@JsonIgnoreProperties(ignoreUnknown = true)
public class XmlEntry implements Serializable {
private static final long serialVersionUID = 1L;
public String id; // 2c6b6620-e3e3-3294-aaf5-38e546ff672b
public String title; // ManagementConsole
public String published; // 2021-11-09T21:13:40.467+01:00
public Category category;
@JsonIgnoreProperties(ignoreUnknown = true)
public class Category {
public String term;
}
@JsonProperty("link")
public Link link;
//public List<Link> links;
/*public List<Link> getLinks() {
return links;
}
*/
public Content content;
public Content getContent() {
return content;
}
public boolean hasContent() {
return content != null;
}
@JsonIgnoreProperties({ "type" })
public static class Content {
@JsonProperty("ManagementConsole")
private ManagementConsoleEntry managementConsoleEntry;
public ManagementConsoleEntry getManagementConsole() {
return managementConsoleEntry;
}
public boolean isManagementConsole() {
return managementConsoleEntry != null;
}
@JsonProperty("ManagedSystem")
private ManagedSystemEntry managedSystemEntry;
public ManagedSystemEntry getManagedSystemEntry() {
return managedSystemEntry;
}
public boolean isManagedSystem() {
return managedSystemEntry != null;
}
@JsonProperty("ManagedSystemPcmPreference")
private ManagedSystemPcmPreference managedSystemPcmPreference;
public ManagedSystemPcmPreference getManagedSystemPcmPreference() {
return managedSystemPcmPreference;
}
public boolean isManagedSystemPcmPreference() {
return managedSystemPcmPreference != null;
}
@JsonAlias("VirtualIOServer")
private VirtualIOServerEntry virtualIOServerEntry;
public VirtualIOServerEntry getVirtualIOServerEntry() {
return virtualIOServerEntry;
}
public boolean isVirtualIOServer() {
return virtualIOServerEntry != null;
}
@JsonAlias("LogicalPartition")
private LogicalPartitionEntry logicalPartitionEntry;
public LogicalPartitionEntry getLogicalPartitionEntry() {
return logicalPartitionEntry;
}
public boolean isLogicalPartition() {
return logicalPartitionEntry != null;
}
}
}


@ -0,0 +1,35 @@
package biz.nellemann.hmci.dto.xml;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlElementWrapper;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
//@JsonIgnoreProperties({ "link" })
@JsonIgnoreProperties(ignoreUnknown = true)
public class XmlFeed implements Serializable {
private static final long serialVersionUID = 1L;
public String id; // 347ecfcf-acac-3724-8915-a3d7d7a6f298
public String updated; // 2021-11-09T21:13:39.591+01:00
public String generator; // IBM Power Systems Management Console
@JsonProperty("link")
@JacksonXmlElementWrapper(useWrapping = false)
public List<Link> links; // <link rel="SELF" href="https://10.32.64.39:12443/rest/api/uom/ManagementConsole"/>
@JsonProperty("entry")
@JacksonXmlElementWrapper(useWrapping = false)
public List<XmlEntry> entries;
public XmlEntry getEntry() {
return entries.size() > 0 ? entries.get(0) : null;
}
}


@ -1,6 +0,0 @@
package biz.nellemann.hmci.pcm;
public final class EnergyUtil {
public final PowerUtil powerUtil = new PowerUtil();
public final ThermalUtil thermalUtil = new ThermalUtil();
}


@ -1,30 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class FiberChannelAdapter {
public String id = "";
public String wwpn = "";
public String physicalLocation = "";
public Integer numOfPorts = 0;
@FirstElement
public Number numOfReads = 0.0;
@FirstElement
public Number numOfWrites = 0.0;
@FirstElement
public Number readBytes = 0.0;
@FirstElement
public Number writeBytes = 0.0;
@FirstElement
public Number runningSpeed = 0.0;
@FirstElement
public Number transmittedBytes = 0.0;
}


@ -1,29 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class GenericAdapter {
public String id = "";
public String type = "";
public String physicalLocation = "";
@FirstElement
public Number receivedPackets = 0.0;
@FirstElement
public Number sentPackets = 0.0;
@FirstElement
public Number droppedPackets = 0.0;
@FirstElement
public Number sentBytes = 0.0;
@FirstElement
public Number receivedBytes = 0.0;
@FirstElement
public Number transferredBytes = 0.0;
}


@ -1,26 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class GenericPhysicalAdapters {
public String id = "";
public String type = "";
public String physicalLocation = "";
@FirstElement
public Number numOfReads = 0.0;
@FirstElement
public Number numOfWrites = 0.0;
@FirstElement
public Number readBytes = 0.0;
@FirstElement
public Number writeBytes = 0.0;
@FirstElement
public Number transmittedBytes = 0.0;
}


@ -1,28 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class GenericVirtualAdapter {
public String id = "";
public String type = "";
public Integer viosId = 0;
public String physicalLocation = "";
@FirstElement
public Number numOfReads = 0.0;
@FirstElement
public Number numOfWrites = 0.0;
@FirstElement
public Number readBytes = 0.0;
@FirstElement
public Number writeBytes = 0.0;
@FirstElement
public Number transmittedBytes = 0.0;
}


@ -1,16 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class LparMemory {
@FirstElement
public Number logicalMem = 0.0;
@FirstElement
public Number utilizedMem = 0.0;
@FirstElement
public Number backedPhysicalMem = 0.0;
}

@@ -1,44 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class LparProcessor {
public Integer poolId = 0;
public Integer weight = 0;
public String mode = "";
@FirstElement
public Number maxVirtualProcessors = 0.0;
@FirstElement
public Number currentVirtualProcessors = 0.0;
@FirstElement
public Number maxProcUnits = 0.0;
@FirstElement
public Number entitledProcUnits = 0.0;
@FirstElement
public Number utilizedProcUnits = 0.0;
@FirstElement
public Number utilizedCappedProcUnits = 0.0;
@FirstElement
public Number utilizedUncappedProcUnits = 0.0;
@FirstElement
public Number idleProcUnits = 0.0;
@FirstElement
public Number donatedProcUnits = 0.0;
@FirstElement
public Number timeSpentWaitingForDispatch = 0.0;
@FirstElement
public Number timePerInstructionExecution = 0.0;
}

@@ -1,15 +0,0 @@
package biz.nellemann.hmci.pcm;
import java.util.ArrayList;
import java.util.List;
public final class Network {
public final List<String> clientLpars = new ArrayList<>();
public final List<GenericAdapter> genericAdapters = new ArrayList<>();
public final List<SharedAdapter> sharedAdapters = new ArrayList<>();
public final List<VirtualEthernetAdapter> virtualEthernetAdapters = new ArrayList<>();
public final List<SriovLogicalPort> sriovLogicalPorts = new ArrayList<>();
}

@@ -1,7 +0,0 @@
package biz.nellemann.hmci.pcm;
public final class PcmData {
public final SystemUtil systemUtil = new SystemUtil();
}

@@ -1,22 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class PhysicalProcessorPool {
@FirstElement
public Number assignedProcUnits = 0.0;
@FirstElement
public Number utilizedProcUnits = 0.0;
@FirstElement
public Number availableProcUnits = 0.0;
@FirstElement
public Number configuredProcUnits = 0.0;
@FirstElement
public Number borrowedProcUnits = 0.0;
}

@@ -1,11 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class PowerUtil {
@FirstElement
public Number powerReading = 0.0;
}

@@ -1,8 +0,0 @@
package biz.nellemann.hmci.pcm;
public final class SampleInfo {
public String timeStamp = "";
public Integer status = 0;
}
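SampleInfo (also removed here) carries the sample timestamp as a raw String. A sketch of turning it into a `java.time` value; the `+0200`-style offset pattern is an assumption based on the timestamps visible in the test URLs elsewhere in this diff:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class SampleTimestampSketch {

    // Assumed HMC timestamp layout, e.g. "2020-08-07T12:26:00+0200"
    private static final DateTimeFormatter HMC_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssZ");

    static OffsetDateTime parseTimestamp(String timeStamp) {
        return OffsetDateTime.parse(timeStamp, HMC_FORMAT);
    }

    public static void main(String[] args) {
        OffsetDateTime t = parseTimestamp("2020-08-07T12:26:00+0200");
        // Normalize to a UTC instant before writing to a time-series store
        System.out.println(t.toInstant());
    }
}
```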

@@ -1,22 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class ServerMemory {
@FirstElement
public Number totalMem = 0.0;
@FirstElement
public Number availableMem = 0.0;
@FirstElement
public Number configurableMem = 0.0;
@FirstElement
public Number assignedMemToLpars = 0.0;
@FirstElement
public Number virtualPersistentMem = 0.0;
}

@@ -1,19 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class ServerProcessor {
@FirstElement
public Number totalProcUnits = 0.0;
@FirstElement
public Number utilizedProcUnits = 0.0;
@FirstElement
public Number availableProcUnits = 0.0;
@FirstElement
public Number configurableProcUnits = 0.0;
}

@@ -1,13 +0,0 @@
package biz.nellemann.hmci.pcm;
import java.util.ArrayList;
import java.util.List;
public final class ServerUtil {
public final ServerProcessor processor = new ServerProcessor();
public final ServerMemory memory = new ServerMemory();
public final PhysicalProcessorPool physicalProcessorPool = new PhysicalProcessorPool();
public final List<SharedProcessorPool> sharedProcessorPool = new ArrayList<>();
}

@@ -1,32 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class SharedAdapter {
public String id = "";
public String type = "";
public String physicalLocation = "";
@FirstElement
public Number receivedPackets = 0.0;
@FirstElement
public Number sentPackets = 0.0;
@FirstElement
public Number droppedPackets = 0.0;
@FirstElement
public Number sentBytes = 0.0;
@FirstElement
public Number receivedBytes = 0.0;
@FirstElement
public Number transferredBytes = 0.0;
@FirstElement
public String bridgedAdapters = "";
}

@@ -1,25 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class SharedProcessorPool {
public String id = "";
public String name = "";
@FirstElement
public Number assignedProcUnits = 0.0;
@FirstElement
public Number utilizedProcUnits = 0.0;
@FirstElement
public Number availableProcUnits = 0.0;
@FirstElement
public Number configuredProcUnits = 0.0;
@FirstElement
public Number borrowedProcUnits = 0.0;
}

@@ -1,14 +0,0 @@
package biz.nellemann.hmci.pcm;
import java.util.ArrayList;
import java.util.List;
public final class Storage {
public final List<String> clientLpars = new ArrayList<>();
public final List<GenericPhysicalAdapters> genericPhysicalAdapters = new ArrayList<>();
public final List<GenericVirtualAdapter> genericVirtualAdapters = new ArrayList<>();
public final List<FiberChannelAdapter> fiberChannelAdapters = new ArrayList<>();
public final List<VirtualFiberChannelAdapter> virtualFiberChannelAdapters = new ArrayList<>();
}

@@ -1,13 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class SystemFirmware {
@FirstElement
public Number utilizedProcUnits = 0.0;
@FirstElement
public Number assignedMem = 0.0;
}

@@ -1,14 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
import com.squareup.moshi.Json;
public final class SystemUtil {
public UtilInfo utilInfo;
@FirstElement
@Json(name = "utilSamples")
public UtilSample sample;
}
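SystemUtil is the root of the PCM document: `@Json(name = "utilSamples")` plus `@FirstElement` bind the `sample` field to the first entry of the `utilSamples` array. The JSON shape implied by those annotations is roughly as follows (values illustrative; `utilInfo` fields omitted since UtilInfo is not shown in this diff):

```json
{
  "systemUtil": {
    "utilInfo": { },
    "utilSamples": [
      {
        "sampleType": "ManagedSystem",
        "sampleInfo": { "timeStamp": "2020-08-07T12:26:00+0200", "status": 0 }
      }
    ]
  }
}
```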

@@ -1,12 +0,0 @@
package biz.nellemann.hmci.pcm;
import java.util.ArrayList;
import java.util.List;
public final class ThermalUtil {
public final List<Temperature> inletTemperatures = new ArrayList<>();
public final List<Temperature> cpuTemperatures = new ArrayList<>();
public final List<Temperature> baseboardTemperatures = new ArrayList<>();
}

@@ -1,20 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
import java.util.ArrayList;
import java.util.List;
public final class UtilSample {
public String sampleType = "";
public final SampleInfo sampleInfo = new SampleInfo();
public final SystemFirmware systemFirmwareUtil = new SystemFirmware();
public final ServerUtil serverUtil = new ServerUtil();
public final EnergyUtil energyUtil = new EnergyUtil();
public final List<ViosUtil> viosUtil = new ArrayList<>();
@FirstElement
public final LparUtil lparsUtil = new LparUtil();
}

@@ -1,13 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class ViosMemory {
@FirstElement
public Number assignedMem = 0.0;
@FirstElement
public Number utilizedMem = 0.0;
}

@@ -1,51 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class VirtualEthernetAdapter {
public String physicalLocation = "";
public Integer vlanId = 0;
public Integer vswitchId = 0;
public Boolean isPortVlanId = false;
public Integer viosId = 0;
public String sharedEthernetAdapterId = "";
@FirstElement
public Number receivedPackets = 0.0;
@FirstElement
public Number sentPackets = 0.0;
@FirstElement
public Number droppedPackets = 0.0;
@FirstElement
public Number sentBytes = 0.0;
@FirstElement
public Number receivedBytes = 0.0;
@FirstElement
public Number receivedPhysicalPackets = 0.0;
@FirstElement
public Number sentPhysicalPackets = 0.0;
@FirstElement
public Number droppedPhysicalPackets = 0.0;
@FirstElement
public Number sentPhysicalBytes = 0.0;
@FirstElement
public Number receivedPhysicalBytes = 0.0;
@FirstElement
public Number transferredBytes = 0.0;
@FirstElement
public Number transferredPhysicalBytes = 0.0;
}

@@ -1,31 +0,0 @@
package biz.nellemann.hmci.pcm;
import com.serjltt.moshi.adapters.FirstElement;
public final class VirtualFiberChannelAdapter {
public String wwpn = "";
public String wwpn2 = "";
public String physicalLocation = "";
public String physicalPortWWPN = "";
public Integer viosId = 0;
@FirstElement
public Number numOfReads = 0.0;
@FirstElement
public Number numOfWrites = 0.0;
@FirstElement
public Number readBytes = 0.0;
@FirstElement
public Number writeBytes = 0.0;
@FirstElement
public Number runningSpeed = 0.0;
@FirstElement
public Number transmittedBytes = 0.0;
}

@@ -1,5 +1,8 @@
 package biz.nellemann.hmci
+import biz.nellemann.hmci.dto.toml.Configuration
+import biz.nellemann.hmci.dto.toml.HmcConfiguration
+import com.fasterxml.jackson.dataformat.toml.TomlMapper
 import spock.lang.Specification
 import java.nio.file.Path
@@ -10,36 +13,52 @@ class ConfigurationTest extends Specification {
     Path testConfigurationFile = Paths.get(getClass().getResource('/hmci.toml').toURI())
+    TomlMapper mapper
+
+    def setup() {
+        mapper = new TomlMapper();
+    }
+
+    def cleanup() {
+    }
 
     void "test parsing of configuration file"() {
         when:
-        Configuration conf = new Configuration(testConfigurationFile)
+        Configuration conf = mapper.readerFor(Configuration.class).readValue(testConfigurationFile.toFile())
+        println(conf.hmc.entrySet().forEach((e) -> {
+            println((String)e.key + " -> " + e);
+            HmcConfiguration c = e.value;
+            println(c.url);
+        }));
 
         then:
        conf != null
    }
 
+    void "test HMC energy flag, default setting"() {
+        when:
+        Configuration conf = mapper.readerFor(Configuration.class).readValue(testConfigurationFile.toFile())
+        then:
+        !conf.hmc.get("site1").energy
+    }
+
-    void "test energy flag, default setting"() {
+    void "test HMC exclude and include options"() {
         when:
-        Configuration conf = new Configuration(testConfigurationFile)
+        Configuration conf = mapper.readerFor(Configuration.class).readValue(testConfigurationFile.toFile())
         then:
-        !conf.getHmc().get(0).energy
-    }
-
-    void "test exclude and include options"() {
-        when:
-        Configuration conf = new Configuration(testConfigurationFile)
-        then:
-        conf.getHmc().get(0).excludeSystems.contains("notThisSys")
-        conf.getHmc().get(0).includeSystems.contains("onlyThisSys")
-        conf.getHmc().get(0).excludePartitions.contains("notThisPartition")
-        conf.getHmc().get(0).includePartitions.contains("onlyThisPartition")
+        conf.hmc.get("site1").excludeSystems.contains("notThisSys")
+        conf.hmc.get("site1").includeSystems.contains("onlyThisSys")
+        conf.hmc.get("site1").excludePartitions.contains("notThisPartition")
+        conf.hmc.get("site1").includePartitions.contains("onlyThisPartition")
     }

@@ -1,102 +0,0 @@
package biz.nellemann.hmci
import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer
import spock.lang.Specification
class HmcRestClientTest extends Specification {
HmcRestClient hmc
MockWebServer mockServer = new MockWebServer()
def setup() {
mockServer.start()
hmc = new HmcRestClient(mockServer.url("/").toString(), "testUser", "testPassword", true)
hmc.authToken = "blaBla"
}
def cleanup() {
mockServer.shutdown()
}
void "test against empty xml"() {
setup:
def testXml = ""
mockServer.enqueue(new MockResponse().setBody(testXml))
when:
Map<String, ManagedSystem> systems = hmc.getManagedSystems()
then:
systems.size() == 0
}
void "test getManagedSystems"() {
setup:
def testFile = new File(getClass().getResource('/managed-systems.xml').toURI())
def testXml = testFile.getText('UTF-8')
mockServer.enqueue(new MockResponse().setBody(testXml))
when:
Map<String, ManagedSystem> systems = hmc.getManagedSystems()
then:
systems.size() == 2
systems.get("e09834d1-c930-3883-bdad-405d8e26e166").name == "S822L-8247-213C1BA"
}
void "test getLogicalPartitionsForManagedSystem"() {
setup:
def testFile = new File(getClass().getResource('/logical-partitions.xml').toURI())
def testXml = testFile.getText('UTF-8')
mockServer.enqueue(new MockResponse().setBody(testXml))
when:
ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
Map<String, LogicalPartition> partitions = hmc.getLogicalPartitionsForManagedSystem(system)
then:
partitions.size() == 12
partitions.get("3380A831-9D22-4F03-A1DF-18B249F0FF8E").name == "AIX_Test1-e0f725f0-00000005"
partitions.get("3380A831-9D22-4F03-A1DF-18B249F0FF8E").type == "AIX/Linux"
}
void "test getBody with JSON for ManagedSystem"() {
setup:
def testFile = new File(getClass().getResource('/pcm-data-managed-system.json').toURI())
def testJson = testFile.getText('UTF-8')
mockServer.enqueue(new MockResponse().setBody(testJson))
when:
String jsonString = hmc.sendGetRequest(new URL(mockServer.url("/rest/api/pcm/ProcessedMetrics/ManagedSystem_e09834d1-c930-3883-bdad-405d8e26e166_20200807T122600+0200_20200807T122600+0200_30.json") as String))
then:
jsonString.contains('"uuid": "e09834d1-c930-3883-bdad-405d8e26e166"')
}
void "test getBody with JSON for LogicalPartition"() {
setup:
def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
def testJson = testFile.getText('UTF-8')
mockServer.enqueue(new MockResponse().setBody(testJson))
when:
String jsonString = hmc.sendGetRequest(new URL(mockServer.url("/rest/api/pcm/ProcessedMetrics/LogicalPartition_2DE05DB6-8AD5-448F-8327-0F488D287E82_20200807T123730+0200_20200807T123730+0200_30.json") as String))
then:
jsonString.contains('"uuid": "b597e4da-2aab-3f52-8616-341d62153559"')
}
// getPcmDataForManagedSystem
// getPcmDataForLogicalPartition
}

@@ -1,5 +1,6 @@
 package biz.nellemann.hmci
+import biz.nellemann.hmci.dto.toml.InfluxConfiguration
 import spock.lang.Ignore
 import spock.lang.Specification
@@ -9,7 +10,7 @@ class InfluxClientTest extends Specification {
     InfluxClient influxClient
 
     def setup() {
-        influxClient = new InfluxClient(new Configuration.InfluxObject("http://localhost:8086", "root", "", "hmci"))
+        influxClient = new InfluxClient(new InfluxConfiguration("http://localhost:8086", "root", "", "hmci"))
         influxClient.login()
     }

@@ -1,101 +1,145 @@
 package biz.nellemann.hmci
+import biz.nellemann.hmci.dto.xml.LogicalPartitionEntry
+import org.mockserver.integration.ClientAndServer
+import org.mockserver.logging.MockServerLogger
+import org.mockserver.socket.PortFactory
+import org.mockserver.socket.tls.KeyStoreFactory
+import spock.lang.Shared
 import spock.lang.Specification
+import javax.net.ssl.HttpsURLConnection
 
 class LogicalPartitionTest extends Specification {
 
+    @Shared
+    private static ClientAndServer mockServer;
+
+    @Shared
+    private RestClient serviceClient
+
+    @Shared
+    private ManagedSystem managedSystem
+
+    @Shared
+    private LogicalPartition logicalPartition
+
+    @Shared
+    private File metricsFile
+
-    void "test processPcmJson for LogicalPartition"() {
-
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-
-        when:
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-        lpar.processMetrics(testJson)
-
-        then:
-        lpar.metrics.systemUtil.sample.lparsUtil.memory.logicalMem == 8192.000
-        lpar.metrics.systemUtil.sample.lparsUtil.processor.utilizedProcUnits == 0.001
-        lpar.metrics.systemUtil.sample.lparsUtil.network.virtualEthernetAdapters.first().receivedBytes == 276.467
-    }
+    def setupSpec() {
+        HttpsURLConnection.setDefaultSSLSocketFactory(new KeyStoreFactory(new MockServerLogger()).sslContext().getSocketFactory());
+        mockServer = ClientAndServer.startClientAndServer(PortFactory.findFreePort());
+        serviceClient = new RestClient(String.format("http://localhost:%d", mockServer.getPort()), "user", "password", false)
+        MockResponses.prepareClientResponseForLogin(mockServer)
+        MockResponses.prepareClientResponseForManagedSystem(mockServer)
+        MockResponses.prepareClientResponseForVirtualIOServer(mockServer)
+        MockResponses.prepareClientResponseForLogicalPartition(mockServer)
+        serviceClient.login()
+        managedSystem = new ManagedSystem(serviceClient, String.format("%s/rest/api/uom/ManagementConsole/2c6b6620-e3e3-3294-aaf5-38e546ff672b/ManagedSystem/b597e4da-2aab-3f52-8616-341d62153559", serviceClient.baseUrl));
+        managedSystem.discover()
+        logicalPartition = managedSystem.logicalPartitions.first()
+        logicalPartition.refresh()
+        metricsFile = new File("src/test/resources/3-logical-partition-perf-data.json")
+    }
+
+    def cleanupSpec() {
+        mockServer.stop()
+    }
+
+    def setup() {
+    }
+
+    def "check that we found 2 logical partitions"() {
+        expect:
+        managedSystem.logicalPartitions.size() == 18
+    }
+
+    def "check name of 1st virtual server"() {
+        when:
+        LogicalPartitionEntry entry = logicalPartition.entry
+        then:
+        entry.getName() == "rhel8-ocp-helper"
+    }
+
+    void "process metrics data"() {
+        when:
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        then:
+        logicalPartition.metric != null
+    }
+
+    void "test basic metrics"() {
+        when:
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        then:
+        logicalPartition.metric.getSample().lparsUtil.memory.logicalMem == 8192.000
+        logicalPartition.metric.getSample().lparsUtil.processor.utilizedProcUnits == 0.001
+        logicalPartition.metric.getSample().lparsUtil.network.virtualEthernetAdapters.first().receivedBytes == 276.467
+    }
     void "test getDetails"() {
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
         when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getDetails()
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        List<Measurement> listOfMeasurements = logicalPartition.getDetails()
 
         then:
         listOfMeasurements.size() == 1
         listOfMeasurements.first().fields['affinityScore'] == 100.0
         listOfMeasurements.first().fields['osType'] == 'Linux'
         listOfMeasurements.first().fields['type'] == 'AIX/Linux'
-        listOfMeasurements.first().tags['lparname'] == '9Flash01'
+        listOfMeasurements.first().tags['lparname'] == 'rhel8-ocp-helper'
     }
     void "test getMemoryMetrics"() {
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
         when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getMemoryMetrics()
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        List<Measurement> listOfMeasurements = logicalPartition.getMemoryMetrics()
 
         then:
         listOfMeasurements.size() == 1
         listOfMeasurements.first().fields['logicalMem'] == 8192.000
-        listOfMeasurements.first().tags['lparname'] == '9Flash01'
+        listOfMeasurements.first().tags['lparname'] == 'rhel8-ocp-helper'
     }
     void "test getProcessorMetrics"() {
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
         when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getProcessorMetrics()
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        List<Measurement> listOfMeasurements = logicalPartition.getProcessorMetrics()
 
         then:
         listOfMeasurements.size() == 1
         listOfMeasurements.first().fields['utilizedProcUnits'] == 0.001
-        listOfMeasurements.first().tags['lparname'] == '9Flash01'
+        listOfMeasurements.first().tags['lparname'] == 'rhel8-ocp-helper'
     }
     void "test getVirtualEthernetAdapterMetrics"() {
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
         when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getVirtualEthernetAdapterMetrics()
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        List<Measurement> listOfMeasurements = logicalPartition.getVirtualEthernetAdapterMetrics()
 
         then:
         listOfMeasurements.size() == 1
@@ -103,17 +147,12 @@ class LogicalPartitionTest extends Specification {
         listOfMeasurements.first().tags['location'] == 'U9009.42A.21F64EV-V13-C32'
     }
     void "test getVirtualFiberChannelAdaptersMetrics"() {
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
         when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getVirtualFibreChannelAdapterMetrics()
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        List<Measurement> listOfMeasurements = logicalPartition.getVirtualFibreChannelAdapterMetrics()
 
         then:
         listOfMeasurements.size() == 4
@@ -122,48 +161,16 @@ class LogicalPartitionTest extends Specification {
     }
     void "test getVirtualGenericAdapterMetrics"() {
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
         when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getVirtualGenericAdapterMetrics()
+        logicalPartition.deserialize(metricsFile.getText('UTF-8'))
+        List<Measurement> listOfMeasurements = logicalPartition.getVirtualGenericAdapterMetrics()
 
         then:
         listOfMeasurements.size() == 1
         listOfMeasurements.first().fields['readBytes'] == 0.0
     }
-    void "test getSriovLogicalPortMetrics'"() {
-
-        setup:
-        def testFile = new File(getClass().getResource('/pcm-data-logical-partition-sriov.json').toURI())
-        def testJson = testFile.getText('UTF-8')
-        ManagedSystem system = new ManagedSystem("e09834d1-c930-3883-bdad-405d8e26e166", "Test Name","Test Type", "Test Model", "Test S/N")
-        LogicalPartition lpar = new LogicalPartition("2DE05DB6-8AD5-448F-8327-0F488D287E82", "9Flash01", "OS400", system)
-
-        when:
-        lpar.processMetrics(testJson)
-        List<Measurement> listOfMeasurements = lpar.getSriovLogicalPorts()
-
-        then:
-        listOfMeasurements.size() == 6
-        listOfMeasurements.first().tags['location'] == "U78CA.001.CSS0CXA-P1-C2-C1-T1-S2"
-        listOfMeasurements.first().tags['vnicDeviceMode'] == "NonVNIC"
-        listOfMeasurements.first().tags['configurationType'] == "Ethernet"
-        listOfMeasurements.first().fields['drcIndex'] == "654327810"
-        listOfMeasurements.first().fields['physicalPortId'] == 0
-        listOfMeasurements.first().fields['physicalDrcIndex'] == "553713681"
-        listOfMeasurements.first().fields['receivedPackets'] == 16.867
-        listOfMeasurements.first().fields['sentPackets'] == 0.067
-        listOfMeasurements.first().fields['sentBytes'] == 8.533
-        listOfMeasurements.first().fields['receivedBytes'] == 1032.933
-        listOfMeasurements.first().fields['transferredBytes'] == 1041.466
-    }
 }

@@ -0,0 +1,28 @@
package biz.nellemann.hmci
import biz.nellemann.hmci.dto.xml.ManagedSystemEntry
import biz.nellemann.hmci.dto.xml.XmlEntry
import com.fasterxml.jackson.dataformat.xml.XmlMapper
import spock.lang.Specification
class ManagedSystemEntryTest extends Specification {
void "parsing hmc xml managed system"() {
setup:
def testFile = new File(getClass().getResource('/2-managed-system.xml').toURI())
XmlMapper xmlMapper = new XmlMapper();
when:
XmlEntry entry = xmlMapper.readValue(testFile, XmlEntry.class);
ManagedSystemEntry managedSystem = entry.getContent().getManagedSystemEntry()
then:
managedSystem != null
managedSystem.activatedLevel == 145
managedSystem.activatedServicePackNameAndLevel == "FW930.50 (145)"
}
}

Some files were not shown because too many files have changed in this diff.