Initial implementation of issue #3.
parent 4402e7094b
commit 326b913e57

README.md: 21 lines changed

@@ -45,7 +45,7 @@ If you do not enable *Performance Monitoring Data Collection for Managed Servers
 
 ### 2 - InfluxDB and Grafana Installation
 
-Install InfluxDB (v. **1.8** for best compatibility with Grafana) on an LPAR or VM, which is network accessible by the *HMCi* utility (the default InfluxDB port is 8086). You can install Grafana on the same server or any server which is able to connect to the InfluxDB database. The Grafana installation needs to be accessible from your browser. The default settings for both InfluxDB and Grafana will work fine as a start.
+Install InfluxDB (v. **1.8** for best compatibility with Grafana) on an LPAR or VM, which is network accessible by the *HMCi* utility (the default InfluxDB port is 8086). You can install Grafana on the same server or any server which is able to connect to the InfluxDB database. The Grafana installation needs to be accessible from your browser (default on port 3000). The default settings for both InfluxDB and Grafana will work fine as a start.
 
 - You can download [Grafana ppc64le](https://www.power-devops.com/grafana) and [InfluxDB ppc64le](https://www.power-devops.com/influxdb) packages for most Linux distributions and AIX on the [Power DevOps](https://www.power-devops.com/) site.
 - Binaries for amd64/x86 are available from the [Grafana website](https://grafana.com/grafana/download) and [InfluxDB website](https://portal.influxdata.com/downloads/) and most likely directly from your Linux distribution's repositories.

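The installation paragraph above names only the ports; for reference, a minimal sketch of getting both services running and creating the database on a systemd-based Linux host ("influxdb-host" is a placeholder, and the database name follows the "hmci" database referred to later in this README):

```shell
# Assumes InfluxDB 1.8 and Grafana are already installed from the packages linked above
sudo systemctl enable --now influxdb          # InfluxDB listens on port 8086 by default
sudo systemctl enable --now grafana-server    # Grafana listens on port 3000 by default

# Create the database HMCi writes into
influx -execute 'CREATE DATABASE "hmci"'

# Quick reachability check from the host that will run HMCi
curl -s -o /dev/null -w "%{http_code}\n" http://influxdb-host:8086/ping   # expect 204
```
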
@@ -78,11 +78,24 @@ Install *HMCi* on a host, which can connect to the Power HMC through HTTPS, and
 
 ## Notes
 
+### No data (or past/future data) shown in Grafana
+
+This is most likely due to timezone, date and/or NTP not being configured correctly on the HMC and/or the host running HMCi.
+
+Example showing how you configure related settings through the HMC CLI:
+
+```shell
+chhmc -c xntp -s enable                               # Enable the NTP service
+chhmc -c xntp -s add -a IP_Addr                       # Add a remote NTP server
+chhmc -c date -s modify --timezone Europe/Copenhagen  # Configure your timezone
+chhmc -c date -s modify --datetime 01301615           # Set current date/time: MMDDhhmm[[CC]YY][.ss]
+```
+
+Remember to reboot your HMC after changing the timezone.
+
 ### Compatibility with nextract Plus
 
-From version 1.2 *HMCi* is made compatible with the similar [nextract Plus](https://www.ibm.com/support/pages/nextract-plus-hmc-rest-api-performance-statistics) tool from Nigel Griffiths. This means that the Grafana [dashboards](https://grafana.com/grafana/dashboards/13819) made by Nigel are compatible with *HMCi*.
+From version 1.2 *HMCi* is made compatible with the similar [nextract Plus](https://www.ibm.com/support/pages/nextract-plus-hmc-rest-api-performance-statistics) tool from Nigel Griffiths. This means that the Grafana [dashboards](https://grafana.com/grafana/dashboards/13819) made by Nigel are compatible with *HMCi*, and the other way around.
 
-### Start InfluxDB and Grafana at boot on RedHat 7+
+### Start InfluxDB and Grafana at boot (systemd compatible Linux)
 
 ```shell
 systemctl enable influxdb

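The chhmc example above covers the HMC side; on the Linux host running HMCi, a quick way to check and fix the same settings is systemd's timedatectl (a sketch, assuming a systemd-based host):

```shell
timedatectl status                               # current time, timezone and NTP sync state
sudo timedatectl set-timezone Europe/Copenhagen  # match the timezone used on the HMC
sudo timedatectl set-ntp true                    # enable NTP synchronisation
```
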
@@ -94,8 +107,6 @@ systemctl start grafana-server
 
 ### InfluxDB Retention Policy
 
-Per default the *hmci* influx database has no retention policy, so data will be kept forever. It is recommended to set a retention policy, which is shown below.
-
 Examples for changing the default InfluxDB retention policy for the hmci database:
 
 ```text

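The example block itself is cut off at the hunk boundary above; purely for illustration, a retention-policy change on an InfluxDB 1.8 database can look like the following (policy name, database name and duration are assumptions, not taken from the commit):

```text
influx -execute 'SHOW RETENTION POLICIES ON "hmci"'
influx -execute 'ALTER RETENTION POLICY "autogen" ON "hmci" DURATION 365d SHARD DURATION 7d DEFAULT'
```
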
@@ -29,3 +29,7 @@ unsafe = true # Ignore SSL cert. errors
 #unsafe = false # When false, validate SSL/TLS certificate, default is true
 #energy = false # When false, do not collect energy metrics, default is true
 #trace = "/tmp/hmci-trace" # When present, store JSON metrics files from HMC into this folder
+#excludeSystems = [ 'notThisSystem' ] # Collect metrics from all systems except those listed here
+#includeSystems = [ 'onlyThisSystems' ] # Collect metrics only from the systems listed here
+#excludePartitions = [ 'skipThisPartition' ] # Collect metrics from all partitions except those listed here
+#includePartitions = [ 'onlyThisPartition' ] # Collect metrics only from the partitions listed here

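To show how the new options combine in practice, a hypothetical variation of the example [hmc.site2] section from the sample configuration (values are placeholders; an empty or absent include list means no include filtering, as implemented in Configuration.java below):

```toml
[hmc.site2]
username = "hmci"
password = "hmcihmci"
unsafe = true                              # Ignore SSL cert. errors
excludeSystems = [ 'testSystem' ]          # Collect from all systems except this one
#includeSystems = [ 'prodSystem' ]         # Alternatively: collect only from the listed systems
excludePartitions = [ 'vios1', 'vios2' ]   # Skip these partitions
```
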
@@ -1,4 +1,4 @@
 projectId = hmci
 projectGroup = biz.nellemann.hmci
-projectVersion = 1.2.6
+projectVersion = 1.2.7
 projectJavaVersion = 1.8

@@ -23,11 +23,11 @@ import java.io.IOException;
 import java.nio.file.Path;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
 
 public final class Configuration {
 
-    //private final static Logger log = LoggerFactory.getLogger(Configuration.class);
-
     final private Long update;
     final private Long rescan;
 

@@ -103,6 +103,42 @@ public final class Configuration {
                 c.trace = null;
             }
 
+            if(hmcTable.contains(key+".excludeSystems")) {
+                List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".excludeSystems").toList();
+                c.excludeSystems = tmpList.stream()
+                    .map(object -> Objects.toString(object, null))
+                    .collect(Collectors.toList());
+            } else {
+                c.excludeSystems = new ArrayList<>();
+            }
+
+            if(hmcTable.contains(key+".includeSystems")) {
+                List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".includeSystems").toList();
+                c.includeSystems = tmpList.stream()
+                    .map(object -> Objects.toString(object, null))
+                    .collect(Collectors.toList());
+            } else {
+                c.includeSystems = new ArrayList<>();
+            }
+
+            if(hmcTable.contains(key+".excludePartitions")) {
+                List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".excludePartitions").toList();
+                c.excludePartitions = tmpList.stream()
+                    .map(object -> Objects.toString(object, null))
+                    .collect(Collectors.toList());
+            } else {
+                c.excludePartitions = new ArrayList<>();
+            }
+
+            if(hmcTable.contains(key+".includePartitions")) {
+                List<Object> tmpList = hmcTable.getArrayOrEmpty(key+".includePartitions").toList();
+                c.includePartitions = tmpList.stream()
+                    .map(object -> Objects.toString(object, null))
+                    .collect(Collectors.toList());
+            } else {
+                c.includePartitions = new ArrayList<>();
+            }
+
             list.add(c);
         }
     }

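The same TOML-array-to-string-list conversion is repeated for all four new keys above; a stand-alone sketch of the pattern (class and method names are hypothetical, not part of the commit):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

// Hypothetical, stand-alone version of the conversion the commit repeats for
// excludeSystems, includeSystems, excludePartitions and includePartitions:
// every element of the raw TOML array becomes a String; a missing key is handled
// upstream by assigning an empty list instead.
public class TomlListConversion {

    static List<String> toStringList(List<Object> tomlArray) {
        return tomlArray.stream()
                .map(object -> Objects.toString(object, null))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Object> raw = Arrays.asList("notThisSys", "andNotThisSys");
        System.out.println(toStringList(raw));   // prints [notThisSys, andNotThisSys]
    }
}
```
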
@@ -193,6 +229,10 @@ public final class Configuration {
         Boolean unsafe = false;
         Boolean energy = true;
         String trace;
+        List<String> excludeSystems;
+        List<String> includeSystems;
+        List<String> excludePartitions;
+        List<String> includePartitions;
         Long update = 30L;
         Long rescan = 60L;
 

@@ -26,6 +26,7 @@ import java.io.IOException;
 import java.time.Duration;
 import java.time.Instant;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
 import java.util.concurrent.atomic.AtomicBoolean;
 

@@ -48,6 +49,10 @@ class HmcInstance implements Runnable {
     private File traceDir;
     private Boolean doTrace = false;
    private Boolean doEnergy = true;
+    private List<String> excludeSystems;
+    private List<String> includeSystems;
+    private List<String> excludePartitions;
+    private List<String> includePartitions;
 
     HmcInstance(HmcObject configHmc, InfluxClient influxClient) {
         this.hmcId = configHmc.name;

@@ -71,6 +76,10 @@ class HmcInstance implements Runnable {
                 log.error("HmcInstance() - trace error: " + e.getMessage());
             }
         }
+        this.excludeSystems = configHmc.excludeSystems;
+        this.includeSystems = configHmc.includeSystems;
+        this.excludePartitions = configHmc.excludePartitions;
+        this.includePartitions = configHmc.includePartitions;
     }
 
 

@@ -154,20 +163,46 @@ class HmcInstance implements Runnable {
 
             // Add to list of known systems
             if(!systems.containsKey(systemId)) {
-                systems.put(systemId, system);
-                log.info("discover() - Found ManagedSystem: " + system);
-                if(doEnergy) {
-                    hmcRestClient.enableEnergyMonitoring(system);
+
+                // Check excludeSystems and includeSystems
+                if(!excludeSystems.contains(system.name) && includeSystems.isEmpty()) {
+                    systems.put(systemId, system);
+                    log.info("discover() - Adding ManagedSystem: {}", system);
+                    if (doEnergy) {
+                        hmcRestClient.enableEnergyMonitoring(system);
+                    }
+                } else if(!includeSystems.isEmpty() && includeSystems.contains(system.name)) {
+                    systems.put(systemId, system);
+                    log.info("discover() - Adding ManagedSystem (include): {}", system);
+                    if (doEnergy) {
+                        hmcRestClient.enableEnergyMonitoring(system);
+                    }
+                } else {
+                    log.debug("discover() - Skipping ManagedSystem: {}", system);
                 }
+
             }
 
             // Get partitions for this system
             try {
                 tmpPartitions.putAll(hmcRestClient.getLogicalPartitionsForManagedSystem(system));
 
                 if(!tmpPartitions.isEmpty()) {
                     partitions.clear();
-                    partitions.putAll(tmpPartitions);
+                    //partitions.putAll(tmpPartitions);
+                    tmpPartitions.forEach((lparKey, lpar) -> {
+                        if(!excludePartitions.contains(lpar.name) && includePartitions.isEmpty()) {
+                            partitions.put(lparKey, lpar);
+                            log.info("discover() - Adding LogicalPartition: {}", lpar);
+                        } else if(!includePartitions.isEmpty() && includePartitions.contains(lpar.name)) {
+                            partitions.put(lparKey, lpar);
+                            log.info("discover() - Adding LogicalPartition (include): {}", lpar);
+                        } else {
+                            log.debug("discover() - Skipping LogicalPartition: {}", lpar);
+                        }
+                    });
                 }
 
             } catch (Exception e) {
                 log.warn("discover() - getLogicalPartitions error: {}", e.getMessage());
             }

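The selection rule added above is the same for systems and partitions: an item is collected when the include list names it, or when the include list is empty and the exclude list does not name it. A stand-alone sketch of that rule (class and method names are hypothetical, not part of the commit):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical illustration of the include/exclude precedence used in discover():
// a non-empty include list acts as a whitelist, otherwise the exclude list acts as a blacklist.
public class FilterRule {

    static boolean isCollected(String name, List<String> include, List<String> exclude) {
        if (!include.isEmpty()) {
            return include.contains(name);
        }
        return !exclude.contains(name);
    }

    public static void main(String[] args) {
        List<String> none = Collections.emptyList();
        System.out.println(isCollected("sys1", none, Arrays.asList("sys2")));         // true  - not excluded
        System.out.println(isCollected("sys2", none, Arrays.asList("sys2")));         // false - excluded
        System.out.println(isCollected("sys1", Arrays.asList("onlyThis"), none));     // false - not on the include list
        System.out.println(isCollected("onlyThis", Arrays.asList("onlyThis"), none)); // true  - on the include list
    }
}
```
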
@@ -30,4 +30,17 @@ class ConfigurationTest extends Specification {
 
     }
 
+    void "test exclude and include options"() {
+
+        when:
+        Configuration conf = new Configuration(testConfigurationFile)
+
+        then:
+        conf.getHmc().get(0).excludeSystems.contains("notThisSys")
+        conf.getHmc().get(0).includeSystems.contains("onlyThisSys")
+        conf.getHmc().get(0).excludePartitions.contains("notThisPartition")
+        conf.getHmc().get(0).includePartitions.contains("onlyThisPartition")
+
+    }
+
 }

@@ -20,6 +20,11 @@ username = "hmci"
 password = "hmcihmci"
 unsafe = true   # Ignore SSL cert. errors
 energy = false  # Do not try to collect energy metrics
+excludeSystems = [ 'notThisSys', 'andNotThisSys' ]
+includeSystems = [ 'onlyThisSys', 'andOnlyThisSys' ]
+excludePartitions = [ 'notThisPartition' ]
+includePartitions = [ 'onlyThisPartition' ]
+
 
 # Example
 #[hmc.site2]