For Zabbix version: 6.2 and higher. This template monitors a SAN NetApp FAS3220 cluster via Zabbix SNMP agent.
This template was tested on:
See Zabbix template operation for basic instructions.
1. Create a host for FAS3220 with cluster management IP as SNMPv2 interface.
2. Link the template to the host.
3. Customize macro values if needed.
No specific Zabbix configuration is required.
Name | Description | Default |
---|---|---|
{$CPU.UTIL.CRIT} | The critical threshold of the CPU utilization in %. |
90 |
{$FAS3220.FS.AVAIL.MIN.CRIT} | Minimum available space on the disk. Can be used with {#FSNAME} as context. |
10G |
{$FAS3220.FS.NAME.MATCHES} | This macro is used in filesystems discovery. Can be overridden on the host or linked template level. |
.* |
{$FAS3220.FS.NAME.NOT_MATCHES} | This macro is used in filesystems discovery. Can be overridden on the host or linked template level. |
snapshot |
{$FAS3220.FS.PUSED.MAX.CRIT} | Maximum percentage of disk used. Can be used with {#FSNAME} as context. |
90 |
{$FAS3220.FS.TIME} | The time during which disk usage may exceed the threshold. Can be used with {#FSNAME} as context. |
10m |
{$FAS3220.FS.TYPE.MATCHES} | This macro is used in filesystems discovery. Can be overridden on the host or linked template level. Value should be integer: 2 - flexibleVolume, 3 - aggregate, 4 - stripedAggregate, 5 - stripedVolume. |
.* |
{$FAS3220.FS.TYPE.NOT_MATCHES} | This macro is used in filesystems discovery. Can be overridden on the host or linked template level. Value should be integer: 2 - flexibleVolume, 3 - aggregate, 4 - stripedAggregate, 5 - stripedVolume. |
CHANGE_IF_NEEDED |
{$FAS3220.FS.USE.PCT} | This macro defines which threshold the disk space trigger uses: 0 - bytes ({$FAS3220.FS.AVAIL.MIN.CRIT}), 1 - percent ({$FAS3220.FS.PUSED.MAX.CRIT}). Can be used with {#FSNAME} as context; see the sketch after this table. |
1 |
{$FAS3220.NET.PORT.NAME.MATCHES} | This macro is used in net ports discovery. Can be overridden on the host or linked template level. |
.* |
{$FAS3220.NET.PORT.NAME.NOT_MATCHES} | This macro is used in net ports discovery. Can be overridden on the host or linked template level. |
CHANGE_IF_NEEDED |
{$FAS3220.NET.PORT.ROLE.MATCHES} | This macro is used in net ports discovery. Can be overridden on the host or linked template level. {#ROLE} is integer. Possible values: 0 - undef 1 - cluster 2 - data 3 - node-mgmt 4 - intercluster 5 - cluster-mgmt |
.* |
{$FAS3220.NET.PORT.ROLE.NOT_MATCHES} | This macro is used in net ports discovery. Can be overridden on the host or linked template level. {#ROLE} is integer. Possible values: 0 - undef 1 - cluster 2 - data 3 - node-mgmt 4 - intercluster 5 - cluster-mgmt |
CHANGE_IF_NEEDED |
{$FAS3220.NET.PORT.TYPE.MATCHES} | This macro is used in net ports discovery. Can be overridden on the host or linked template level. Possible values of {#TYPE}: physical, if-group, vlan, undef. |
.* |
{$FAS3220.NET.PORT.TYPE.NOT_MATCHES} | This macro is used in net ports discovery. Can be overridden on the host or linked template level. Possible values of {#TYPE}: physical, if-group, vlan, undef. |
CHANGE_IF_NEEDED |
{$ICMP_LOSS_WARN} | - |
20 |
{$ICMP_RESPONSE_TIME_WARN} | - |
0.15 |
{$IF.ERRORS.WARN} | - |
`` |
{$IF.UTIL.MAX} | - |
95 |
{$SNMP.TIMEOUT} | - |
5m |
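The byte/percent switch above is easier to see in code. Below is a minimal Python sketch (not part of the template) of how the disk-space triggers resolve a context macro, falling back to the global value when no {#FSNAME}-specific override exists, and then pick the byte-based or percent-based threshold. The macro names match the table; the filesystem names and sample values are made up.

```python
# Illustrative model of Zabbix user-macro context resolution and the
# byte/percent threshold switch used by the disk-space triggers.
macros = {
    '{$FAS3220.FS.USE.PCT}': '1',                      # global default
    '{$FAS3220.FS.PUSED.MAX.CRIT}': '90',
    '{$FAS3220.FS.AVAIL.MIN.CRIT}': '10G',
    '{$FAS3220.FS.PUSED.MAX.CRIT:"vol_logs"}': '80',   # hypothetical per-FS override
}

def resolve(name: str, context: str) -> str:
    """Return the context-specific value if defined, else the global one."""
    return macros.get(f'{{${name}:"{context}"}}', macros[f'{{${name}}}'])

def disk_space_alert(fsname: str, avail_bytes: int, pused: float) -> bool:
    if resolve('FAS3220.FS.USE.PCT', fsname) == '0':
        # Byte mode: compare free space against {$FAS3220.FS.AVAIL.MIN.CRIT}.
        limit = resolve('FAS3220.FS.AVAIL.MIN.CRIT', fsname)
        units = {'K': 2**10, 'M': 2**20, 'G': 2**30, 'T': 2**40}
        threshold = float(limit.rstrip('KMGT')) * units.get(limit[-1], 1)
        return avail_bytes < threshold
    # Percent mode: compare used % against {$FAS3220.FS.PUSED.MAX.CRIT}.
    return pused > float(resolve('FAS3220.FS.PUSED.MAX.CRIT', fsname))

print(disk_space_alert('vol_logs', 50 * 2**30, 85.0))  # True: 85% > 80% override
print(disk_space_alert('vol_data', 50 * 2**30, 85.0))  # False: 85% < 90% global
```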
There are no template links in this template.
Name | Description | Type | Key and additional info | |
---|---|---|---|---|
Cluster metrics discovery | Discovery of Cluster metrics per node |
SNMP | fas3220.cluster.discovery | |
CPU discovery | Discovery of CPU metrics per node |
SNMP | fas3220.cpu.discovery | |
Filesystems discovery | Filesystems discovery with filter. |
SNMP | fas3220.fs.discovery Filter: AND - {#FSTYPE} MATCHES_REGEX - {#FSTYPE} NOT_MATCHES_REGEX - {#FSNAME} MATCHES_REGEX - {#FSNAME} NOT_MATCHES_REGEX (the filter semantics are sketched after this table) Overrides: Do not discover aggregate metrics - {#FSTYPE} MATCHES_REGEX `4` - ITEM_PROTOTYPE LIKE `Saved` - NO_DISCOVER |
HA discovery | Discovery of high availability metrics per node |
SNMP | fas3220.ha.discovery | |
Network ports discovery | Network interfaces discovery with filter. |
SNMP | fas3220.net.discovery Preprocessing: - JAVASCRIPT: Filter: AND - {#TYPE} MATCHES_REGEX - {#TYPE} NOT_MATCHES_REGEX - {#ROLE} MATCHES_REGEX - {#ROLE} NOT_MATCHES_REGEX - {#IFNAME} MATCHES_REGEX - {#IFNAME} NOT_MATCHES_REGEX |
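In both filtered discovery rules above, the MATCHES/NOT_MATCHES macro pairs are applied to each discovered entity and joined with AND. As a reference point only (this is not Zabbix internals), here is a small Python sketch of the filesystem filter with the default macro values from the table above:

```python
import re

# Default filter values from the macro table.
FS_NAME_MATCHES = '.*'
FS_NAME_NOT_MATCHES = 'snapshot'
FS_TYPE_MATCHES = '.*'
FS_TYPE_NOT_MATCHES = 'CHANGE_IF_NEEDED'

def discovered(fsname: str, fstype: str) -> bool:
    """All four conditions are ANDed, mirroring 'Filter: AND' above."""
    return (re.search(FS_NAME_MATCHES, fsname) is not None
            and re.search(FS_NAME_NOT_MATCHES, fsname) is None
            and re.search(FS_TYPE_MATCHES, fstype) is not None
            and re.search(FS_TYPE_NOT_MATCHES, fstype) is None)

print(discovered('/vol/vol0/', '2'))            # True: a flexibleVolume
print(discovered('/vol/vol0/.snapshot/', '2'))  # False: name matches 'snapshot'
```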
Group | Name | Description | Type | Key and additional info |
---|---|---|---|---|
CPU | Node {#NODE.NAME}: CPU utilization | The average, over the last minute, of the percentage of time that this processor was not idle. |
SNMP | fas3220.cpu[cDOTCpuBusyTimePerCent, "{#NODE.NAME}"] |
General | SNMP traps (fallback) | The item is used to collect all SNMP traps unmatched by other snmptrap items |
SNMP_TRAP | snmptrap.fallback |
General | System location | MIB: SNMPv2-MIB The physical location of this node (e.g., 'telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string. |
SNMP | system.location[sysLocation.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System contact details | MIB: SNMPv2-MIB The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string. |
SNMP | system.contact[sysContact.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System object ID | MIB: SNMPv2-MIB The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining 'what kind of box' is being managed. |
SNMP | system.objectid[sysObjectID.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System name | MIB: SNMPv2-MIB An administratively-assigned name for this managed node. By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string. |
SNMP | system.name Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System description | MIB: SNMPv2-MIB A textual description of the entity. This value should include the full name and version identification of the system's hardware type, software operating-system, and networking software. |
SNMP | system.descr[sysDescr.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | NetApp FAS3220: Product version | MIB: NETAPP-MIB Version string for the software running on this platform. |
SNMP | fas3220.inventory[productVersion] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | NetApp FAS3220: Product firmware version | Version string for the firmware running on this platform. |
SNMP | fas3220.inventory[productFirmwareVersion] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | NetApp FAS3220: Failed disks count | The number of disks that are currently broken. |
SNMP | fas3220.disk[diskFailedCount] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | NetApp FAS3220: Failed disks message | If diskFailedCount is non-zero, this is a string describing the failed disk or disks. Each failed disk is described. |
SNMP | fas3220.disk[diskFailedMessage] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Location | Node Location. Same as sysLocation for a specific node. |
SNMP | fas3220.cluster[nodeLocation, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Model | Node Model. Same as productModel for a specific node. |
SNMP | fas3220.cluster[nodeModel, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Serial number | Node Serial Number. Same as productSerialNum for a specific node. |
SNMP | fas3220.cluster[nodeSerialNumber, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Uptime | Node uptime. Same as sysUpTime for a specific node. |
SNMP | fas3220.cluster[nodeUptime, "{#NODE.NAME}"] Preprocessing: - MULTIPLIER: |
NetApp FAS3220 | Node {#NODE.NAME}: Health | Whether or not the node can communicate with the cluster. |
SNMP | fas3220.cluster[nodeHealth, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: NVRAM battery status | An indication of the current status of the NVRAM battery or batteries. Batteries which are fully or partially discharged may not fully protect the system during a crash. The end-of-life status values are based on the manufacturer's recommended life for the batteries. Possible values: ok(1), partiallyDischarged(2), fullyDischarged(3), notPresent(4), nearEndOfLife(5), atEndOfLife(6), unknown(7), overCharged(8), fullyCharged(9). |
SNMP | fas3220.cluster[nodeNvramBatteryStatus, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Over-temperature | An indication of whether the hardware is currently operating outside of its recommended temperature range. The hardware will shut down if the temperature exceeds critical thresholds. |
SNMP | fas3220.cluster[nodeEnvOverTemperature, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Failed FAN count | Count of the number of chassis fans that are not operating within the recommended RPM range. |
SNMP | fas3220.cluster[nodeEnvFailedFanCount, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Failed FAN message | Text message describing current condition of chassis fans. This is useful only if envFailedFanCount is not zero. |
SNMP | fas3220.cluster[nodeEnvFailedFanMessage, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Degraded power supplies count | Count of the number of power supplies that are in degraded mode. |
SNMP | fas3220.cluster[nodeEnvFailedPowerSupplyCount, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Degraded power supplies message | Text message describing the state of any power supplies that are currently degraded. This is useful only if envFailedPowerSupplyCount is not zero. |
SNMP | fas3220.cluster[nodeEnvFailedPowerSupplyMessage, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: Cannot takeover cause | The reason this node cannot take over its HA partner {#PARTNER.NAME}. Possible states: ok(1), unknownReason(2), disabledByOperator(3), interconnectOffline(4), disabledByPartner(5), takeoverFailed(6), mailboxIsInDegradedState(7), partnermailboxIsInUninitialisedState(8), mailboxVersionMismatch(9), nvramSizeMismatch(10), kernelVersionMismatch(11), partnerIsInBootingStage(12), diskshelfIsTooHot(13), partnerIsPerformingRevert(14), nodeIsPerformingRevert(15), sametimePartnerIsAlsoTryingToTakeUsOver(16), alreadyInTakenoverMode(17), nvramLogUnsynchronized(18), stateofBackupMailboxIsDoubtful(19). |
SNMP | fas3220.ha[haCannotTakeoverCause, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE.NAME}: HA settings | High Availability configuration settings. The value notConfigured(1) indicates that the HA is not licensed. The thisNodeDead(5) setting indicates that this node has been taken over. |
SNMP | fas3220.ha[haSettings, "{#NODE.NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | {#VSERVER}{#FSNAME}: Total space used | The total disk space that is in use on {#FSNAME}. |
SNMP | fas3220.fs[df64UsedKBytes, "{#VSERVER}{#FSNAME}"] Preprocessing: - MULTIPLIER: |
NetApp FAS3220 | {#VSERVER}{#FSNAME}: Total space available | The total disk space that is free for use on {#FSNAME}. |
SNMP | fas3220.fs[df64AvailKBytes, "{#VSERVER}{#FSNAME}"] Preprocessing: - MULTIPLIER: |
NetApp FAS3220 | {#VSERVER}{#FSNAME}: Total space | The total capacity in Bytes for {#FSNAME}. |
SNMP | fas3220.fs[df64TotalKBytes, "{#VSERVER}{#FSNAME}"] Preprocessing: - MULTIPLIER: |
NetApp FAS3220 | {#VSERVER}{#FSNAME}: Used space percents | The percentage of disk space currently in use on {#FSNAME}. |
SNMP | fas3220.fs[dfPerCentKBytesCapacity, "{#VSERVER}{#FSNAME}"] |
NetApp FAS3220 | {#VSERVER}{#FSNAME}: Saved by compression percents | Provides the percentage of compression savings in a volume, which is ((comprsaved/(comprsaved + used)) * 100). This is only returned for volumes. |
SNMP | fas3220.fs[dfCompressSavedPercent, "{#VSERVER}{#FSNAME}"] |
NetApp FAS3220 | {#VSERVER}{#FSNAME}: Saved by deduplication percents | Provides the percentage of deduplication savings in a volume, which is ((dedupsaved/(dedupsaved + used)) * 100). This is only returned for volumes. |
SNMP | fas3220.fs[dfDedupeSavedPercent, "{#VSERVER}{#FSNAME}"] |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Up by an administrator | Indicates whether the port status is set 'UP' by an administrator. |
SNMP | fas3220.net.port[netportUpAdmin, "{#NODE}", "{#IFNAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Role | Role of the port. A port must have one of the following roles: cluster(1), data(2), mgmt(3), intercluster(4), cluster-mgmt(5) or undef(0). The cluster port is used to communicate with other node(s) in the cluster. The data port services clients' requests. It is where all the file requests come in. The management port is used by the administrator to manage resources within a node. The intercluster port is used to communicate with another cluster. The cluster-mgmt port is used to manage resources within the cluster. The undef role is for a port that has not yet been assigned a role. |
SNMP | fas3220.net.port[netportRole, "{#NODE}", "{#IFNAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Speed | The speed appears on the port. It can be either undef(0), auto(1), ten Mb/s(2), hundred Mb/s(3), one Gb/s(4), or ten Gb/s(5). |
SNMP | fas3220.net.port[netportSpeedOper, "{#NODE}", "{#IFNAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Bits received | The total number of octets received on the interface, including framing characters. (The conversion to bits per second is sketched after this table.) |
SNMP | fas3220.net.if[if64InOctets, "{#NODE}", "{#IFNAME}"] Preprocessing: - MULTIPLIER: - CHANGEPERSECOND |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Bits sent | The total number of octets transmitted out of the interface, including framing characters. |
SNMP | fas3220.net.if[if64OutOctets, "{#NODE}", "{#IFNAME}"] Preprocessing: - MULTIPLIER: - CHANGEPERSECOND |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): State | The link-state of the port. Normally it is either UP(2) or DOWN(3). |
SNMP | fas3220.net.port[netportLinkState, "{#NODE}", "{#IFNAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Health | The health status of the port. |
SNMP | fas3220.net.port[netportHealthStatus, "{#NODE}", "{#IFNAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
NetApp FAS3220 | Node {#NODE}: port {#IFNAME} ({#TYPE}): Health degraded reason | The list of reasons why the port is marked as degraded. |
SNMP | fas3220.net.port[netportDegradedReason, "{#NODE}", "{#IFNAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Network interfaces | Node {#NODE}: port {#IFNAME} ({#TYPE}): Inbound packets with errors | MIB: IF-MIB The number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. |
SNMP | fas3220.net.if[if64InErrors, "{#NODE}", "{#IFNAME}"] Preprocessing: - CHANGEPERSECOND |
Network interfaces | Node {#NODE}: port {#IFNAME} ({#TYPE}): Outbound packets with errors | MIB: IF-MIB The number of outbound packets that could not be transmitted because of errors. |
SNMP | fas3220.net.if[if64OutErrors, "{#NODE}", "{#IFNAME}"] Preprocessing: - CHANGEPERSECOND |
Network interfaces | Node {#NODE}: port {#IFNAME} ({#TYPE}): Inbound packets discarded | MIB: IF-MIB The number of inbound packets that were chosen to be discarded even though no errors had been detected to prevent their being deliverable to a higher-layer protocol. One possible reason for discarding such a packet could be to free up buffer space. |
SNMP | fas3220.net.if[if64InDiscards, "{#NODE}", "{#IFNAME}"] Preprocessing: - CHANGEPERSECOND |
Network interfaces | Node {#NODE}: port {#IFNAME} ({#TYPE}): Outbound packets discarded | MIB: IF-MIB The number of outbound packets that were chosen to be discarded even though no errors had been detected to prevent their being transmitted. One possible reason for discarding such a packet could be to free up buffer space. |
SNMP | fas3220.net.if[if64OutDiscards, "{#NODE}", "{#IFNAME}"] Preprocessing: - CHANGEPERSECOND |
Status | Uptime (network) | MIB: SNMPv2-MIB The time (in hundredths of a second) since the network management portion of the system was last re-initialized. |
SNMP | system.net.uptime[sysUpTime.0] Preprocessing: - MULTIPLIER: |
Status | Uptime (hardware) | MIB: HOST-RESOURCES-MIB The amount of time since this host was last initialized. Note that this is different from sysUpTime in the SNMPv2-MIB [RFC1907] because sysUpTime is the uptime of the network management portion of the system. |
SNMP | system.hw.uptime[hrSystemUptime.0] Preprocessing: - CHECKNOTSUPPORTED ⛔️ON_FAIL: - MULTIPLIER: |
Status | SNMP agent availability | Availability of SNMP checks on the host. The value of this item corresponds to availability icons in the host list. Possible values: 0 - not available, 1 - available, 2 - unknown. |
INTERNAL | zabbix[host,snmp,available] |
Status | ICMP ping | - |
SIMPLE | icmpping |
Status | ICMP loss | - |
SIMPLE | icmppingloss |
Status | ICMP response time | - |
SIMPLE | icmppingsec |
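The "Bits received" and "Bits sent" items above apply a MULTIPLIER step (octets to bits) and a change-per-second step to the raw if64InOctets/if64OutOctets counters. A quick Python sketch of that arithmetic, with made-up sample values:

```python
# Two successive polls of a 64-bit octet counter such as if64InOctets
# (timestamps are Unix time; all values are made up).
prev_value, prev_time = 1_000_000_000, 1700000000.0
curr_value, curr_time = 1_750_000_000, 1700000060.0  # polled 60 s later

octets_per_second = (curr_value - prev_value) / (curr_time - prev_time)
bits_per_second = octets_per_second * 8  # the MULTIPLIER (x8) step

print(f'{bits_per_second:.0f} bps')  # 100000000 bps, i.e. a 100 Mb/s average
```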
Name | Description | Expression | Severity | Dependencies and additional info |
---|---|---|---|---|
Node {#NODE.NAME}: High CPU utilization | CPU utilization is too high. The system might be slow to respond. |
min(/NetApp FAS3220 by SNMP/fas3220.cpu[cDOTCpuBusyTimePerCent, "{#NODE.NAME}"],5m)>{$CPU.UTIL.CRIT} |
WARNING | |
System name has changed | System name has changed. Ack to close. |
last(/NetApp FAS3220 by SNMP/system.name,#1)<>last(/NetApp FAS3220 by SNMP/system.name,#2) and length(last(/NetApp FAS3220 by SNMP/system.name))>0 |
INFO | Manual close: YES |
NetApp FAS3220: Number of failed disks has changed | {{ITEM.LASTVALUE2}.regsub("(.*)", \1)} |
last(/NetApp FAS3220 by SNMP/fas3220.disk[diskFailedCount])>0 and last(/NetApp FAS3220 by SNMP/fas3220.disk[diskFailedMessage],#1)<>last(/NetApp FAS3220 by SNMP/fas3220.disk[diskFailedMessage],#2) Recovery expression: last(/NetApp FAS3220 by SNMP/fas3220.disk[diskFailedCount])=0 |
WARNING | |
Node {#NODE.NAME}: Host has been restarted | Uptime is less than 10 minutes. |
last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeUptime, "{#NODE.NAME}"])<10m |
INFO | Manual close: YES |
Node {#NODE.NAME}: Node cannot communicate with the cluster | - |
last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeHealth, "{#NODE.NAME}"])=0 |
HIGH | Manual close: YES |
Node {#NODE.NAME}: NVRAM battery status is not OK | - |
last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeNvramBatteryStatus, "{#NODE.NAME}"])<>1 |
AVERAGE | Manual close: YES |
Node {#NODE.NAME}: Temperature is higher than recommended | The hardware will shut down if the temperature exceeds critical thresholds. |
last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvOverTemperature, "{#NODE.NAME}"])=2 |
HIGH | |
Node {#NODE.NAME}: Failed FAN count is more than zero | {{ITEM.VALUE2}.regsub("(.*)", \1)} |
last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvFailedFanCount, "{#NODE.NAME}"])>0 and last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvFailedFanMessage, "{#NODE.NAME}"])=last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvFailedFanMessage, "{#NODE.NAME}"]) |
HIGH | |
Node {#NODE.NAME}: Degraded power supplies count is more than zero | {{ITEM.VALUE2}.regsub("(.*)", \1)} |
last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvFailedPowerSupplyCount, "{#NODE.NAME}"])>0 and last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvFailedPowerSupplyMessage, "{#NODE.NAME}"])=last(/NetApp FAS3220 by SNMP/fas3220.cluster[nodeEnvFailedPowerSupplyMessage, "{#NODE.NAME}"]) |
AVERAGE | |
Node {#NODE.NAME}: Node cannot take over its HA partner {#PARTNER.NAME}. Reason: {ITEM.VALUE} | Possible reasons: unknownReason(2), disabledByOperator(3), interconnectOffline(4), disabledByPartner(5), takeoverFailed(6), mailboxIsInDegradedState(7), partnermailboxIsInUninitialisedState(8), mailboxVersionMismatch(9), nvramSizeMismatch(10), kernelVersionMismatch(11), partnerIsInBootingStage(12), diskshelfIsTooHot(13), partnerIsPerformingRevert(14), nodeIsPerformingRevert(15), sametimePartnerIsAlsoTryingToTakeUsOver(16), alreadyInTakenoverMode(17), nvramLogUnsynchronized(18), stateofBackupMailboxIsDoubtful(19). |
last(/NetApp FAS3220 by SNMP/fas3220.ha[haCannotTakeoverCause, "{#NODE.NAME}"])<>1 |
HIGH | |
Node {#NODE.NAME}: Node has been taken over | The thisNodeDead(5) setting indicates that this node has been taken over. |
last(/NetApp FAS3220 by SNMP/fas3220.ha[haSettings, "{#NODE.NAME}"])=5 |
HIGH | |
Node {#NODE.NAME}: HA is not licensed | The value notConfigured(1) indicates that the HA is not licensed. |
last(/NetApp FAS3220 by SNMP/fas3220.ha[haSettings, "{#NODE.NAME}"])=1 |
AVERAGE | |
{#VSERVER}{#FSNAME}: Disk space is too low | - |
min(/NetApp FAS3220 by SNMP/fas3220.fs[df64AvailKBytes, "{#VSERVER}{#FSNAME}"],{$FAS3220.FS.TIME:"{#FSNAME}"})<{$FAS3220.FS.AVAIL.MIN.CRIT:"{#FSNAME}"} and {$FAS3220.FS.USE.PCT:"{#FSNAME}"}=0 |
HIGH | |
{#VSERVER}{#FSNAME}: Disk space is too low | - |
max(/NetApp FAS3220 by SNMP/fas3220.fs[dfPerCentKBytesCapacity, "{#VSERVER}{#FSNAME}"],{$FAS3220.FS.TIME:"{#FSNAME}"})>{$FAS3220.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and {$FAS3220.FS.USE.PCT:"{#FSNAME}"}=1 |
HIGH | |
Node {#NODE}: port {#IFNAME} ({#TYPE}): Link down | Link state is not UP and the port status is set 'UP' by an administrator. |
last(/NetApp FAS3220 by SNMP/fas3220.net.port[netportLinkState, "{#NODE}", "{#IFNAME}"])<>2 and last(/NetApp FAS3220 by SNMP/fas3220.net.port[netportUpAdmin, "{#NODE}", "{#IFNAME}"])=1 |
AVERAGE | Manual close: YES |
Node {#NODE}: port {#IFNAME} ({#TYPE}): Port is not healthy | {{ITEM.LASTVALUE2}.regsub("(.*)", \1)} |
last(/NetApp FAS3220 by SNMP/fas3220.net.port[netportHealthStatus, "{#NODE}", "{#IFNAME}"])<>0 and length(last(/NetApp FAS3220 by SNMP/fas3220.net.port[netportDegradedReason, "{#NODE}", "{#IFNAME}"]))>0 |
INFO | |
Node {#NODE}: port {#IFNAME} ({#TYPE}): High error rate | Recovers when below 80% of the {$IF.ERRORS.WARN:"{#IFNAME}"} threshold (the hysteresis is sketched after this table). |
min(/NetApp FAS3220 by SNMP/fas3220.net.if[if64InErrors, "{#NODE}", "{#IFNAME}"],5m)>{$IF.ERRORS.WARN:"{#IFNAME}"} or min(/NetApp FAS3220 by SNMP/fas3220.net.if[if64OutErrors, "{#NODE}", "{#IFNAME}"],5m)>{$IF.ERRORS.WARN:"{#IFNAME}"} Recovery expression: max(/NetApp FAS3220 by SNMP/fas3220.net.if[if64InErrors, "{#NODE}", "{#IFNAME}"],5m)<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and max(/NetApp FAS3220 by SNMP/fas3220.net.if[if64OutErrors, "{#NODE}", "{#IFNAME}"],5m)<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 |
WARNING | Manual close: YES |
Host has been restarted | Uptime is less than 10 minutes. |
(last(/NetApp FAS3220 by SNMP/system.hw.uptime[hrSystemUptime.0])>0 and last(/NetApp FAS3220 by SNMP/system.hw.uptime[hrSystemUptime.0])<10m) or (last(/NetApp FAS3220 by SNMP/system.hw.uptime[hrSystemUptime.0])=0 and last(/NetApp FAS3220 by SNMP/system.net.uptime[sysUpTime.0])<10m) |
WARNING | Manual close: YES Depends on: - No SNMP data collection |
No SNMP data collection | SNMP is not available for polling. Please check device connectivity and SNMP settings. |
max(/NetApp FAS3220 by SNMP/zabbix[host,snmp,available],{$SNMP.TIMEOUT})=0 |
WARNING | Depends on: - Unavailable by ICMP ping |
Unavailable by ICMP ping | Last three attempts returned timeout. Please check device connectivity. |
max(/NetApp FAS3220 by SNMP/icmpping,#3)=0 |
HIGH | |
High ICMP ping loss | - |
min(/NetApp FAS3220 by SNMP/icmppingloss,5m)>{$ICMP_LOSS_WARN} and min(/NetApp FAS3220 by SNMP/icmppingloss,5m)<100 |
WARNING | Depends on: - Unavailable by ICMP ping |
High ICMP ping response time | - |
avg(/NetApp FAS3220 by SNMP/icmppingsec,5m)>{$ICMP_RESPONSE_TIME_WARN} |
WARNING | Depends on: - High ICMP ping loss - Unavailable by ICMP ping |
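The "High error rate" trigger pairs its problem expression with a recovery expression at 80% of the threshold, so the trigger does not flap while the error rate hovers around the limit. A minimal Python sketch of that hysteresis (the real expressions use min()/max() over 5-minute windows; the threshold here is an assumed 2 errors/s, since {$IF.ERRORS.WARN} ships empty):

```python
IF_ERRORS_WARN = 2.0  # assumed value for {$IF.ERRORS.WARN}

def next_state(in_problem: bool, error_rate: float) -> bool:
    """Problem opens above the threshold, closes only below 80% of it."""
    if not in_problem:
        return error_rate > IF_ERRORS_WARN
    return not (error_rate < IF_ERRORS_WARN * 0.8)

state = False
for rate in (1.0, 2.5, 1.9, 1.7, 1.5):  # errors per second, made up
    state = next_state(state, rate)
    print(f'{rate:>4} err/s -> {"PROBLEM" if state else "OK"}')
# 1.9 and 1.7 stay in PROBLEM (>= 1.6, i.e. 80% of 2.0); 1.5 recovers.
```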
Please report any issues with the template at https://support.zabbix.com.
You can also provide feedback, discuss the template, or ask for help at ZABBIX forums.
For Zabbix version: 6.2 and higher
This template monitors a SAN NetApp AFF A700 cluster via Zabbix HTTP agent.
This template was tested on:
See Zabbix template operation for basic instructions.
1. Create a host for AFF A700 with cluster management IP as the Zabbix agent interface.
2. Link the template to the host.
3. Customize macro values if needed.
No specific Zabbix configuration is required.
Name | Description | Default |
---|---|---|
{$HTTP.AGENT.TIMEOUT} | The HTTP agent timeout to wait for a response from AFF700. |
3s |
{$PASSWORD} | AFF700 user password. |
`` |
{$URL} | AFF700 cluster URL address (see the request sketch after this table). |
`` |
{$USERNAME} | AFF700 user name. |
`` |
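The HTTP agent items authenticate against the ONTAP REST API using {$USERNAME}/{$PASSWORD} and poll endpoints under {$URL}. Below is a minimal Python sketch of an equivalent request, assuming an ONTAP release that exposes the REST API; the host name and credentials are placeholders, and /api/cluster is the cluster endpoint that the "Get cluster" item reads:

```python
import requests  # third-party: pip install requests

URL = 'https://cluster.example.com'       # stands in for {$URL}
USERNAME, PASSWORD = 'zabbix', 'secret'   # stand in for {$USERNAME}/{$PASSWORD}

# Basic-auth GET with a 3 s timeout, mirroring the {$HTTP.AGENT.TIMEOUT} default.
resp = requests.get(f'{URL}/api/cluster',
                    auth=(USERNAME, PASSWORD),
                    timeout=3,
                    verify=False)  # arrays commonly use self-signed certificates
resp.raise_for_status()
cluster = resp.json()
print(cluster.get('name'), cluster.get('version', {}).get('full'))
```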
There are no template links in this template.
Name | Description | Type | Key and additional info |
---|---|---|---|
Chassis discovery | - |
HTTP_AGENT | netapp.chassis.discovery Preprocessing: - JAVASCRIPT: |
Disks discovery | - |
HTTP_AGENT | netapp.disks.discovery Preprocessing: - JAVASCRIPT: |
Ethernet ports discovery | - |
HTTP_AGENT | netapp.ports.ether.discovery Preprocessing: - JAVASCRIPT: |
FC ports discovery | - |
HTTP_AGENT | netapp.ports.fc.discovery Preprocessing: - JAVASCRIPT: |
FRUs discovery | - |
DEPENDENT | netapp.frus.discovery Preprocessing: - JAVASCRIPT: - DISCARDUNCHANGEDHEARTBEAT: |
LUNs discovery | - |
HTTP_AGENT | netapp.luns.discovery Preprocessing: - JAVASCRIPT: |
Nodes discovery | See the LLD payload sketch after this table. |
HTTP_AGENT | netapp.nodes.discovery Preprocessing: - JAVASCRIPT: |
SVMs discovery | - |
HTTP_AGENT | netapp.svms.discovery Preprocessing: - JAVASCRIPT: |
Volumes discovery | - |
HTTP_AGENT | netapp.volumes.discovery Preprocessing: - JAVASCRIPT: |
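Each discovery rule above uses a JAVASCRIPT preprocessing step to reshape the REST response into Zabbix low-level discovery macros. The script bodies are not reproduced here; the Python sketch below only illustrates the kind of transformation involved, using a made-up nodes response:

```python
import json

# Trimmed, made-up example of what a nodes listing might return.
rest_response = {
    "records": [
        {"name": "node-01"},
        {"name": "node-02"},
    ]
}

# Reshape into the LLD format Zabbix expects: a list of macro objects.
lld = [{"{#NODENAME}": rec["name"]} for rec in rest_response["records"]]
print(json.dumps(lld))
# [{"{#NODENAME}": "node-01"}, {"{#NODENAME}": "node-02"}]
```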
Group | Name | Description | Type | Key and additional info |
---|---|---|---|---|
General | Cluster software version | This returns the cluster version information. When the cluster has more than one node, the cluster version is equivalent to the lowest of generation, major, and minor versions on all nodes. |
DEPENDENT | netapp.cluster.version Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | Cluster name | The name of the cluster. |
DEPENDENT | netapp.cluster.name Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | Cluster location | The location of the cluster. |
DEPENDENT | netapp.cluster.location Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | Cluster status | The status of the cluster: ok, error, partial_no_data, partial_no_response, partial_other_error, negative_delta, backfilled_data, inconsistent_delta_time, inconsistent_old_data. |
DEPENDENT | netapp.cluster.status Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | Cluster throughput, other rate | Throughput bytes observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.cluster.statistics.throughput.other.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster throughput, read rate | Throughput bytes observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.cluster.statistics.throughput.read.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster throughput, write rate | Throughput bytes observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.cluster.statistics.throughput.write.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster throughput, total rate | Throughput bytes observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.cluster.statistics.throughput.total.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster IOPS, other rate | The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.cluster.statistics.iops.other.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster IOPS, read rate | The number of I/O operations observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.cluster.statistics.iops.read.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster IOPS, write rate | The number of I/O operations observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.cluster.statistics.iops.write.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster IOPS, total rate | The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.cluster.statistics.iops.total.rate Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | Cluster latency, other | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. The divide-by-zero guard in the calculated expression is explained in the worked example after this table. |
CALCULATED | netapp.cluster.statistics.latency.other Expression: (last(//netapp.cluster.statistics.latency_raw.other) - last(//netapp.cluster.statistics.latency_raw.other,#2)) / (last(//netapp.cluster.statistics.iops_raw.other) - last(//netapp.cluster.statistics.iops_raw.other,#2) + (last(//netapp.cluster.statistics.iops_raw.other) - last(//netapp.cluster.statistics.iops_raw.other,#2) = 0) ) * 0.001 |
General | Cluster latency, read | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for read I/O operations. |
CALCULATED | netapp.cluster.statistics.latency.read Expression: (last(//netapp.cluster.statistics.latency_raw.read) - last(//netapp.cluster.statistics.latency_raw.read,#2)) / ( last(//netapp.cluster.statistics.iops_raw.read) - last(//netapp.cluster.statistics.iops_raw.read,#2) + (last(//netapp.cluster.statistics.iops_raw.read) - last(//netapp.cluster.statistics.iops_raw.read,#2) = 0) ) * 0.001 |
General | Cluster latency, write | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations. |
CALCULATED | netapp.cluster.statistics.latency.write Expression: (last(//netapp.cluster.statistics.latency_raw.write) - last(//netapp.cluster.statistics.latency_raw.write,#2)) / ( last(//netapp.cluster.statistics.iops_raw.write) - last(//netapp.cluster.statistics.iops_raw.write,#2) + (last(//netapp.cluster.statistics.iops_raw.write) - last(//netapp.cluster.statistics.iops_raw.write,#2) = 0) ) * 0.001 |
General | Cluster latency, total | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric aggregated over all types of I/O operations. |
CALCULATED | netapp.cluster.statistics.latency.total Expression: (last(//netapp.cluster.statistics.latency_raw.total) - last(//netapp.cluster.statistics.latency_raw.total,#2)) / ( last(//netapp.cluster.statistics.iops_raw.total) - last(//netapp.cluster.statistics.iops_raw.total,#2) + (last(//netapp.cluster.statistics.iops_raw.total) - last(//netapp.cluster.statistics.iops_raw.total,#2) = 0) ) * 0.001 |
General | {#NODENAME}: Software version | This returns the cluster version information. When the cluster has more than one node, the cluster version is equivalent to the lowest of generation, major, and minor versions on all nodes. |
DEPENDENT | netapp.node.version[{#NODENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#NODENAME}: Location | The location of the node. |
DEPENDENT | netapp.nodes.location[{#NODENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#NODENAME}: State | State of the node: up - Node is up and operational. booting - Node is booting up. down - Node has stopped or is dumping core. taken_over - Node has been taken over by its HA partner and is not yet waiting for giveback. waiting_for_giveback - Node has been taken over by its HA partner and is waiting for the HA partner to give back disks. degraded - Node has one or more critical services offline. unknown - Node or its HA partner cannot be contacted and there is no information on the node's state. |
DEPENDENT | netapp.nodes.state[{#NODENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#NODENAME}: Membership | Possible values: available - If a node is available, this means it is detected on the internal cluster network and can be added to the cluster. Nodes that have a membership of “available” are not returned when a GET request is called when the cluster exists. A query on the “membership” property for available must be provided to scan for nodes on the cluster network. Nodes that have a membership of “available” are returned automatically before a cluster is created. joining - Joining nodes are in the process of being added to the cluster. The node may be progressing through the steps to become a member or might have failed. The job to add the node or create the cluster provides details on the current progress of the node. member - Nodes that are members have successfully joined the cluster. |
DEPENDENT | netapp.nodes.membership[{#NODENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#NODENAME}: Uptime | The total time, in seconds, that the node has been up. |
DEPENDENT | netapp.nodes.uptime[{#NODENAME}] Preprocessing: - JSONPATH: |
General | {#NODENAME}: Controller over temperature | Specifies whether the hardware is currently operating outside of its recommended temperature range. The hardware shuts down if the temperature exceeds critical thresholds. Possible values: over, normal |
DEPENDENT | netapp.nodes.controller.over_temperature[{#NODENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: 6h |
General | {#ETHPORTNAME}: State | The operational state of the port. Possible values: up, down. |
DEPENDENT | netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#FCPORTNAME}: Description | A description of the FC port. |
DEPENDENT | netapp.port.fc.description[{#NODENAME},{#FCPORTNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#FCPORTNAME}: State | The operational state of the FC port. Possible values: startup - The port is booting up. link_not_connected - The port has finished initialization, but a link with the fabric is not established. online - The port is initialized and a link with the fabric has been established. link_disconnected - The link was present at one point on this port but is currently not established. offlined_by_user - The port is administratively disabled. offlined_by_system - The port is set to offline by the system. This happens when the port encounters too many errors. node_offline - The state information for the port cannot be retrieved. The node is offline or inaccessible. |
DEPENDENT | netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#DISKNAME}: State | The state of the disk. Possible values: broken, copy, maintenance, partner, pending, present, reconstructing, removed, spare, unfail, zeroing |
DEPENDENT | netapp.disk.state[{#NODENAME},{#DISKNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#ID}: State | The chassis state: ok, error. |
DEPENDENT | netapp.chassis.state[{#ID}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#FRUID}: State | The FRU state: ok, error. |
DEPENDENT | netapp.chassis.fru.state[{#CHASSISID},{#FRUID}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#SVMNAME}: State | SVM state: starting, running, stopping, stopped, deleting. |
DEPENDENT | netapp.svm.state[{#SVMNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#SVMNAME}: Comment | The comment for the SVM. |
DEPENDENT | netapp.svm.comment[{#SVMNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#LUNNAME}: State | The state of the LUN. Normal states for a LUN are online and offline. Other states indicate errors. Possible values: foreign_lun_error, nvfail, offline, online, space_error. |
DEPENDENT | netapp.lun.status.state[{#SVMNAME},{#LUNNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#LUNNAME}: Container state | The state of the volume and aggregate that contain the LUN: online, aggregate_offline, volume_offline. LUNs are only available when their containers are available. |
DEPENDENT | netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: 6h |
General | {#LUNNAME}: Space size | The total provisioned size of the LUN. |
DEPENDENT | netapp.lun.space.size[{#SVMNAME},{#LUNNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#LUNNAME}: Space used | The amount of space consumed by the main data stream of the LUN. |
DEPENDENT | netapp.lun.space.used[{#SVMNAME},{#LUNNAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#VOLUMENAME}: Comment | A comment for the volume. |
DEPENDENT | netapp.volume.comment[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#VOLUMENAME}: State | Volume state. A volume can only be brought online if it is offline. Taking a volume offline removes its junction path. The 'mixed' state applies to FlexGroup volumes only and cannot be specified as a target state. An 'error' state implies that the volume is not in a state to serve data. |
DEPENDENT | netapp.volume.state[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#VOLUMENAME}: Type | Type of the volume. rw - read-write volume. dp - data-protection volume. ls - load-sharing dp volume. |
DEPENDENT | netapp.volume.type[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
General | {#VOLUMENAME}: SVM name | The SVM this volume belongs to. |
DEPENDENT | netapp.volume.svm_name[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: 6h |
General | {#VOLUMENAME}: Space size | Total provisioned size. The default size is equal to the minimum size of 20MB, in bytes. |
DEPENDENT | netapp.volume.space_size[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: 6h |
General | {#VOLUMENAME}: Available size | The available space, in bytes. |
DEPENDENT | netapp.volume.space_available[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: 6h |
General | {#VOLUMENAME}: Used size | The virtual space used (includes volume reserves) before storage efficiency, in bytes. |
DEPENDENT | netapp.volume.space_used[{#VOLUMENAME}] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: 6h |
General | {#VOLUMENAME}: Volume throughput, other rate | Throughput bytes observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.volume.statistics.throughput.other.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume throughput, read rate | Throughput bytes observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.volume.statistics.throughput.read.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume throughput, write rate | Throughput bytes observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.volume.statistics.throughput.write.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume throughput, total rate | Throughput bytes observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.volume.statistics.throughput.total.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume IOPS, other rate | The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.volume.statistics.iops.other.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume IOPS, read rate | The number of I/O operations observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.volume.statistics.iops.read.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume IOPS, write rate | The number of I/O operations observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.volume.statistics.iops.write.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume IOPS, total rate | The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.volume.statistics.iops.total.rate[{#VOLUMENAME}] Preprocessing: - JSONPATH: - CHANGEPERSECOND |
General | {#VOLUMENAME}: Volume latency, other | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
CALCULATED | netapp.volume.statistics.latency.other[{#VOLUMENAME}] Expression: (last(//netapp.volume.statistics.latency_raw.other[{#VOLUMENAME}]) - last(//netapp.volume.statistics.latency_raw.other[{#VOLUMENAME}],#2)) / ( last(//netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}],#2) + (last(//netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}],#2) = 0) ) * 0.001 |
General | {#VOLUMENAME}: Volume latency, read | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for read I/O operations. |
CALCULATED | netapp.volume.statistics.latency.read[{#VOLUMENAME}] Expression: (last(//netapp.volume.statistics.latency_raw.read[{#VOLUMENAME}]) - last(//netapp.volume.statistics.latency_raw.read[{#VOLUMENAME}],#2)) / ( last(//netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}],#2) + (last(//netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}],#2) = 0)) * 0.001 |
General | {#VOLUMENAME}: Volume latency, write | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations. |
CALCULATED | netapp.volume.statistics.latency.write[{#VOLUMENAME}] Expression: (last(//netapp.volume.statistics.latency_raw.write[{#VOLUMENAME}]) - last(//netapp.volume.statistics.latency_raw.write[{#VOLUMENAME}],#2)) / ( last(//netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}],#2) + (last(//netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}],#2) = 0) ) * 0.001 |
General | {#VOLUMENAME}: Volume latency, total | The average latency per I/O operation in milliseconds observed at the storage object. Performance metric aggregated over all types of I/O operations. |
CALCULATED | netapp.volume.statistics.latency.total[{#VOLUMENAME}] Expression: (last(//netapp.volume.statistics.latency_raw.total[{#VOLUMENAME}]) - last(//netapp.volume.statistics.latency_raw.total[{#VOLUMENAME}],#2)) / ( last(//netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}],#2) + (last(//netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]) - last(//netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}],#2) = 0) ) * 0.001 |
Zabbix raw items | Get cluster | - |
HTTP_AGENT | netapp.cluster.get |
Zabbix raw items | Get nodes | - |
HTTP_AGENT | netapp.nodes.get |
Zabbix raw items | Get disks | - |
HTTP_AGENT | netapp.disks.get |
Zabbix raw items | Get volumes | - |
HTTP_AGENT | netapp.volumes.get |
Zabbix raw items | Get ethernet ports | - |
HTTP_AGENT | netapp.ports.eth.get |
Zabbix raw items | Get FC ports | - |
HTTP_AGENT | netapp.ports.fc.get |
Zabbix raw items | Get SVMs | - |
HTTP_AGENT | netapp.svms.get |
Zabbix raw items | Get LUNs | - |
HTTP_AGENT | netapp.luns.get |
Zabbix raw items | Get chassis | - |
HTTP_AGENT | netapp.chassis.get |
Zabbix raw items | Get FRUs | - |
HTTP_AGENT | netapp.frus.get Preprocessing: - JAVASCRIPT: |
Zabbix raw items | Cluster latency raw, other | The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.cluster.statistics.latency_raw.other Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster latency raw, read | The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for read I/O operations. |
DEPENDENT | netapp.cluster.statistics.latency_raw.read Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster latency raw, write | The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for write I/O operations. |
DEPENDENT | netapp.cluster.statistics.latency_raw.write Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster latency raw, total | The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.cluster.statistics.latency_raw.total Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster IOPS raw, other | The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.cluster.statistics.iops_raw.other Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster IOPS raw, read | The number of I/O operations observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.cluster.statistics.iops_raw.read Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster IOPS raw, write | The number of I/O operations observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.cluster.statistics.iops_raw.write Preprocessing: - JSONPATH: |
Zabbix raw items | Cluster IOPS raw, total | The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.cluster.statistics.iops_raw.total Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume latency raw, other | The raw latency in microseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.volume.statistics.latency_raw.other[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume latency raw, read | The raw latency in microseconds observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.volume.statistics.latency_raw.read[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume latency raw, write | The raw latency in microseconds observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.volume.statistics.latency_raw.write[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume latency raw, total | The raw latency in microseconds observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.volume.statistics.latency_raw.total[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume IOPS raw, other | The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. |
DEPENDENT | netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume IOPS raw, read | The number of I/O operations observed at the storage object. Performance metric for read I/O operations. |
DEPENDENT | netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume IOPS raw, write | The number of I/O operations observed at the storage object. Performance metric for write I/O operations. |
DEPENDENT | netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
Zabbix raw items | {#VOLUMENAME}: Volume IOPS raw, total | The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations. |
DEPENDENT | netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}] Preprocessing: - JSONPATH: |
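The CALCULATED latency items above turn the monotonically increasing latency_raw (microseconds) and iops_raw counters into an average per-operation latency in milliseconds. The `+ (delta = 0)` term in each expression is a guard that bumps the divisor to 1 when no operations occurred between samples, avoiding division by zero. A Python sketch of the same arithmetic with made-up counter values:

```python
# Two successive samples of the raw counters (made-up values).
latency_raw = [5_000_000, 5_600_000]  # cumulative microseconds of latency
iops_raw = [10_000, 10_400]           # cumulative operation count

lat_delta = latency_raw[1] - latency_raw[0]  # 600000 us spent...
ops_delta = iops_raw[1] - iops_raw[0]        # ...across 400 operations

# '(ops_delta == 0)' evaluates to 1 only when the delta is zero, mirroring
# the guard term in the CALCULATED expressions.
avg_ms = lat_delta / (ops_delta + (ops_delta == 0)) * 0.001
print(f'{avg_ms:.2f} ms per operation')  # 1.50 ms
```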
Name | Description | Expression | Severity | Dependencies and additional info |
---|---|---|---|---|
Version has changed | Cluster version has changed. Ack to close. |
last(/NetApp AFF A700 by HTTP/netapp.cluster.version,#1)<>last(/NetApp AFF A700 by HTTP/netapp.cluster.version,#2) and length(last(/NetApp AFF A700 by HTTP/netapp.cluster.version))>0 |
INFO | Manual close: YES |
Cluster status is abnormal | Any errors associated with the sample. For example, if the aggregation of data over multiple nodes fails, then any of the partial errors might be returned; “ok” on success, or “error” on any internal uncategorized failure. Whenever a sample collection is missed but done at a later time, it is backfilled to the previous 15-second timestamp and tagged with “backfilled_data”. “inconsistent_delta_time” is encountered when the time between two collections is not the same for all nodes. Therefore, the aggregated value might be over- or under-inflated. “negative_delta” is returned when an expected monotonically increasing value has decreased in value. “inconsistent_old_data” is returned when one or more nodes do not have the latest data. |
(last(/NetApp AFF A700 by HTTP/netapp.cluster.status)<>"ok") |
AVERAGE | |
{#NODENAME}: Version has changed | {#NODENAME} version has changed. Ack to close. |
last(/NetApp AFF A700 by HTTP/netapp.node.version[{#NODENAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.node.version[{#NODENAME}],#2) and length(last(/NetApp AFF A700 by HTTP/netapp.node.version[{#NODENAME}]))>0 |
INFO | Manual close: YES |
{#NODENAME}: Node state is abnormal | The state of the node is different from up: booting - Node is booting up. down - Node has stopped or is dumping core. taken_over - Node has been taken over by its HA partner and is not yet waiting for giveback. waiting_for_giveback - Node has been taken over by its HA partner and is waiting for the HA partner to give back disks. degraded - Node has one or more critical services offline. unknown - Node or its HA partner cannot be contacted and there is no information on the node's state. |
(last(/NetApp AFF A700 by HTTP/netapp.nodes.state[{#NODENAME}])<>"up") |
AVERAGE | |
{#NODENAME}: Node has been restarted | Uptime is less than 10 minutes. |
last(/NetApp AFF A700 by HTTP/netapp.nodes.uptime[{#NODENAME}])<10m |
INFO | Manual close: YES |
{#NODENAME}: Node has over temperature | The hardware shuts down if the temperature exceeds critical thresholds (item's value is "over"). |
(last(/NetApp AFF A700 by HTTP/netapp.nodes.controller.over_temperature[{#NODENAME}])<>"normal") |
AVERAGE | |
{#ETHPORTNAME}: Ethernet port of the Node "{#NODENAME}" is down | Something is wrong with the Ethernet port. This and the state triggers below follow the change-and-state pattern sketched after this table. |
(last(/NetApp AFF A700 by HTTP/netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}])="down") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.port.eth.state[{#NODENAME},{#ETHPORTNAME}])="up") |
AVERAGE | Manual close: YES |
{#FCPORTNAME}: FC port of the Node "{#NODENAME}" has state different from "online" | Something is wrong with the FC port. |
(last(/NetApp AFF A700 by HTTP/netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}])<>"online") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.port.fc.state[{#NODENAME},{#FCPORTNAME}])="online") |
AVERAGE | Manual close: YES |
{#DISKNAME}: Disk of the Node "{#NODENAME}" has state different from "present" | Something is wrong with the disk. |
(last(/NetApp AFF A700 by HTTP/netapp.disk.state[{#NODENAME},{#DISKNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.disk.state[{#NODENAME},{#DISKNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.disk.state[{#NODENAME},{#DISKNAME}])<>"present") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.disk.state[{#NODENAME},{#DISKNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.disk.state[{#NODENAME},{#DISKNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.disk.state[{#NODENAME},{#DISKNAME}])="present") |
AVERAGE | Manual close: YES |
{#ID}: Chassis has errors | Something is wrong with the chassis. |
(last(/NetApp AFF A700 by HTTP/netapp.chassis.state[{#ID}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.chassis.state[{#ID}],#2) and last(/NetApp AFF A700 by HTTP/netapp.chassis.state[{#ID}])="error") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.chassis.state[{#ID}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.chassis.state[{#ID}],#2) and last(/NetApp AFF A700 by HTTP/netapp.chassis.state[{#ID}])="ok") |
AVERAGE | Manual close: YES |
{#FRUID}: FRU of the chassis "{#ID}" state is error | Something is wrong with the FRU. |
(last(/NetApp AFF A700 by HTTP/netapp.chassis.fru.state[{#CHASSISID},{#FRUID}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.chassis.fru.state[{#CHASSISID},{#FRUID}],#2) and last(/NetApp AFF A700 by HTTP/netapp.chassis.fru.state[{#CHASSISID},{#FRUID}])="error") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.chassis.fru.state[{#CHASSISID},{#FRUID}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.chassis.fru.state[{#CHASSISID},{#FRUID}],#2) and last(/NetApp AFF A700 by HTTP/netapp.chassis.fru.state[{#CHASSISID},{#FRUID}])="ok") |
AVERAGE | Manual close: YES |
{#SVMNAME}: SVM state is abnormal | Something is wrong with the SVM. |
(last(/NetApp AFF A700 by HTTP/netapp.svm.state[{#SVMNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.svm.state[{#SVMNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.svm.state[{#SVMNAME}])<>"running") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.svm.state[{#SVMNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.svm.state[{#SVMNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.svm.state[{#SVMNAME}])="running") |
AVERAGE | Manual close: YES |
{#LUNNAME}: LUN of the SVM "{#SVMNAME}" has abnormal state | Normal states for a LUN are online and offline. Other states indicate errors. |
(last(/NetApp AFF A700 by HTTP/netapp.lun.status.state[{#SVMNAME},{#LUNNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.lun.status.state[{#SVMNAME},{#LUNNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.lun.status.state[{#SVMNAME},{#LUNNAME}])<>"online") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.lun.status.state[{#SVMNAME},{#LUNNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.lun.status.state[{#SVMNAME},{#LUNNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.lun.status.state[{#SVMNAME},{#LUNNAME}])="online") |
AVERAGE | Manual close: YES |
{#LUNNAME}: LUN of the SVM "{#SVMNAME}" has abnormal container state | LUNs are only available when their containers are available. |
(last(/NetApp AFF A700 by HTTP/netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}])<>"online") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.lun.status.container_state[{#SVMNAME},{#LUNNAME}])="online") |
AVERAGE | Manual close: YES |
{#VOLUMENAME}: Volume state is abnormal | A volume can only be brought online if it is offline. Taking a volume offline removes its junction path. The 'mixed' state applies to FlexGroup volumes only and cannot be specified as a target state. An 'error' state implies that the volume is not in a state to serve data. |
(last(/NetApp AFF A700 by HTTP/netapp.volume.state[{#VOLUMENAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.volume.state[{#VOLUMENAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.volume.state[{#VOLUMENAME}])<>"online") Recovery expression: (last(/NetApp AFF A700 by HTTP/netapp.volume.state[{#VOLUMENAME}],#1)<>last(/NetApp AFF A700 by HTTP/netapp.volume.state[{#VOLUMENAME}],#2) and last(/NetApp AFF A700 by HTTP/netapp.volume.state[{#VOLUMENAME}])="online") |
AVERAGE | Manual close: YES |
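Note: each of these state triggers compares the last two collected values (#1 and #2) in both the problem and the recovery expression, so an event is generated only at the moment the state actually changes; an object that has been in an abnormal state since it was discovered does not produce a new alert on every poll.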
Please report any issues with the template at https://support.zabbix.com.
You can also provide feedback, discuss the template, or ask for help with it at ZABBIX forums.
For Zabbix version: 6.2 and higher. The template to monitor SAN Huawei OceanStor 5300 V5 by Zabbix SNMP agent.
This template was tested on:
See Zabbix template operation for basic instructions.
1. Create a host for Huawei OceanStor 5300 V5 with controller management IP as SNMPv2 interface.
2. Link the template to the host.
3. Customize macro values if needed.
No specific Zabbix configuration is required.
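Before linking the template, it can be useful to confirm that the controller answers SNMP queries. A minimal check with the standard Net-SNMP tools queries sysDescr.0 (the community string and the address below are placeholders):
snmpget -v2c -c public 192.0.2.1 1.3.6.1.2.1.1.1.0
A reply containing the device description confirms that the SNMPv2 interface is reachable.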
Name | Description | Default |
---|---|---|
{$CPU.UTIL.CRIT} | The critical threshold of the CPU utilization in %. |
90 |
{$HUAWEI.5300.DISK.TEMP.MAX.TIME} | The time during which temperature of disk may exceed the threshold. |
5m |
{$HUAWEI.5300.DISK.TEMP.MAX.WARN} | Maximum temperature of disk. Can be used with {#MODEL} as context. |
45 |
{$HUAWEI.5300.LUN.IO.TIME.MAX.TIME} | The time during which average I/O response time of LUN may exceed the threshold. |
5m |
{$HUAWEI.5300.LUN.IO.TIME.MAX.WARN} | Maximum average I/O response time of LUN in milliseconds. |
100 |
{$HUAWEI.5300.MEM.MAX.TIME} | The time during which memory usage may exceed the threshold. |
5m |
{$HUAWEI.5300.MEM.MAX.WARN} | Maximum percentage of memory used. |
90 |
{$HUAWEI.5300.NODE.IO.DELAY.MAX.TIME} | The time during which average I/O latency of node may exceed the threshold. |
5m |
{$HUAWEI.5300.NODE.IO.DELAY.MAX.WARN} | Maximum average I/O latency of node in milliseconds. |
20 |
{$HUAWEI.5300.POOL.CAPACITY.THRESH.TIME} | The time during which free capacity may exceed the {#THRESHOLD} from hwInfoStoragePoolFullThreshold. |
5m |
{$HUAWEI.5300.TEMP.MAX.TIME} | The time during which temperature of enclosure may exceed the threshold. |
3m |
{$HUAWEI.5300.TEMP.MAX.WARN} | Maximum temperature of enclosure. |
35 |
{$ICMP_LOSS_WARN} | - |
20 |
{$ICMP_RESPONSE_TIME_WARN} | - |
0.15 |
{$SNMP.TIMEOUT} | - |
5m |
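Because {$HUAWEI.5300.DISK.TEMP.MAX.WARN} accepts {#MODEL} as context, the disk temperature threshold can be raised for a single disk model without changing the default. A hypothetical host-level override (the model string is only an example) looks like this:
{$HUAWEI.5300.DISK.TEMP.MAX.WARN:"ST4000NM0023"} = 50
Disks of any other model continue to use the default value of 45.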
There are no template links in this template.
Name | Description | Type | Key and additional info |
---|---|---|---|
BBU discovery | Discovery of BBU |
SNMP | huawei.5300.bbu.discovery |
Controllers discovery | Discovery of controllers |
SNMP | huawei.5300.controllers.discovery |
Disks discovery | Discovery of disks |
SNMP | huawei.5300.disks.discovery |
Enclosure discovery | Discovery of enclosures |
SNMP | huawei.5300.enclosure.discovery |
FANs discovery | Discovery of FANs |
SNMP | huawei.5300.fan.discovery |
LUNs discovery | Discovery of LUNs |
SNMP | huawei.5300.lun.discovery |
Nodes performance discovery | Discovery of nodes performance counters |
SNMP | huawei.5300.nodes.discovery |
Storage pools discovery | Discovery of storage pools |
SNMP | huawei.5300.pool.discovery |
Group | Name | Description | Type | Key and additional info |
---|---|---|---|---|
CPU | Controller {#ID}: CPU utilization | CPU usage of a controller {#ID}. |
SNMP | huawei.5300.v5[hwInfoControllerCPUUsage, "{#ID}"] |
CPU | Node {#NODE}: CPU utilization | CPU usage of the node {#NODE}. |
SNMP | huawei.5300.v5[hwPerfNodeCPUUsage, "{#NODE}"] |
General | SNMP traps (fallback) | The item is used to collect all SNMP traps unmatched by other snmptrap items. |
SNMP_TRAP | snmptrap.fallback |
General | System location | MIB: SNMPv2-MIB The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string. |
SNMP | system.location[sysLocation.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System contact details | MIB: SNMPv2-MIB The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string. |
SNMP | system.contact[sysContact.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System object ID | MIB: SNMPv2-MIB The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining `what kind of box' is being managed. |
SNMP | system.objectid[sysObjectID.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System name | MIB: SNMPv2-MIB An administratively-assigned name for this managed node. By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string. |
SNMP | system.name Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
General | System description | MIB: SNMPv2-MIB A textual description of the entity. This value should include the full name and version identification of the system's hardware type, software operating-system, and networking software. |
SNMP | system.descr[sysDescr.0] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | OceanStor 5300 V5: Status | System running status. |
SNMP | huawei.5300.v5[status] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | OceanStor 5300 V5: Version | The device version. |
SNMP | huawei.5300.v5[version] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | OceanStor 5300 V5: Capacity total | Total capacity of a device. |
SNMP | huawei.5300.v5[totalCapacity] Preprocessing: - MULTIPLIER: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | OceanStor 5300 V5: Capacity used | Used capacity of a device. |
SNMP | huawei.5300.v5[usedCapacity] Preprocessing: - MULTIPLIER: |
Huawei | Controller {#ID}: Memory utilization | Memory usage of a controller {#ID}. |
SNMP | huawei.5300.v5[hwInfoControllerMemoryUsage, "{#ID}"] |
Huawei | Controller {#ID}: Health status | Controller health status. For details, see definition of Enum Values (HEALTHSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoControllerHealthStatus, "{#ID}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Controller {#ID}: Running status | Controller running status. For details, see definition of Enum Values (RUNNINGSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoControllerRunningStatus, "{#ID}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Controller {#ID}: Role | Controller role. |
SNMP | huawei.5300.v5[hwInfoControllerRole, "{#ID}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Enclosure {#NAME}: Health status | Enclosure health status. For details, see definition of Enum Values (HEALTHSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoEnclosureHealthStatus, "{#NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Enclosure {#NAME}: Running status | Enclosure running status. For details, see definition of Enum Values (RUNNINGSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoEnclosureRunningStatus, "{#NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Enclosure {#NAME}: Temperature | Enclosure temperature. |
SNMP | huawei.5300.v5[hwInfoEnclosureTemperature, "{#NAME}"] |
Huawei | FAN {#ID} on {#LOCATION}: Health status | Health status of a fan. For details, see definition of Enum Values (HEALTHSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoFanHealthStatus, "{#ID}:{#LOCATION}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | FAN {#ID} on {#LOCATION}: Running status | Operating status of a fan. For details, see definition of Enum Values (RUNNINGSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoFanRunningStatus, "{#ID}:{#LOCATION}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | BBU {#ID} on {#LOCATION}: Health status | Health status of a BBU. For details, see definition of Enum Values (HEALTHSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoBBUHealthStatus, "{#ID}:{#LOCATION}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | BBU {#ID} on {#LOCATION}: Running status | Running status of a BBU. For details, see definition of Enum Values (RUNNINGSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoBBURunningStatus, "{#ID}:{#LOCATION}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Disk {#MODEL} on {#LOCATION}: Health status | Disk health status. For details, see definition of Enum Values (HEALTHSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoDiskHealthStatus, "{#ID}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Disk {#MODEL} on {#LOCATION}: Running status | Disk running status. For details, see definition of Enum Values (RUNNINGSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoDiskRunningStatus, "{#ID}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Disk {#MODEL} on {#LOCATION}: Temperature | Disk temperature. |
SNMP | huawei.5300.v5[hwInfoDiskTemperature, "{#ID}"] |
Huawei | Disk {#MODEL} on {#LOCATION}: Health score | Health score of a disk. A value of 255 indicates that the score is invalid. |
SNMP | huawei.5300.v5[hwInfoDiskHealthMark, "{#ID}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Node {#NODE}: Average I/O latency | Average I/O latency of the node. |
SNMP | huawei.5300.v5[hwPerfNodeDelay, "{#NODE}"] |
Huawei | Node {#NODE}: Total I/O per second | Total IOPS of the node. |
SNMP | huawei.5300.v5[hwPerfNodeTotalIOPS, "{#NODE}"] |
Huawei | Node {#NODE}: Read operations per second | Read IOPS of the node. |
SNMP | huawei.5300.v5[hwPerfNodeReadIOPS, "{#NODE}"] |
Huawei | Node {#NODE}: Write operations per second | Write IOPS of the node. |
SNMP | huawei.5300.v5[hwPerfNodeWriteIOPS, "{#NODE}"] |
Huawei | Node {#NODE}: Total traffic per second | Total bandwidth for the node. |
SNMP | huawei.5300.v5[hwPerfNodeTotalTraffic, "{#NODE}"] Preprocessing: - MULTIPLIER: |
Huawei | Node {#NODE}: Read traffic per second | Read bandwidth for the node. |
SNMP | huawei.5300.v5[hwPerfNodeReadTraffic, "{#NODE}"] Preprocessing: - MULTIPLIER: |
Huawei | Node {#NODE}: Write traffic per second | Write bandwidth for the node. |
SNMP | huawei.5300.v5[hwPerfNodeWriteTraffic, "{#NODE}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Status | Status of the LUN. |
SNMP | huawei.5300.v5[hwStorageLunStatus, "{#NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | LUN {#NAME}: Average total I/O latency | Average I/O latency of the LUN in milliseconds. |
SNMP | huawei.5300.v5[hwPerfLunAverageIOResponseTime, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Average read I/O latency | Average read I/O response time in milliseconds. |
SNMP | huawei.5300.v5[hwPerfLunAverageReadIOLatency, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Average write I/O latency | Average write I/O response time in milliseconds. |
SNMP | huawei.5300.v5[hwPerfLunAverageWriteIOLatency, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Total I/O per second | Current IOPS of the LUN. |
SNMP | huawei.5300.v5[hwPerfLunTotalIOPS, "{#NAME}"] |
Huawei | LUN {#NAME}: Read operations per second | Read IOPS of the LUN. |
SNMP | huawei.5300.v5[hwPerfLunReadIOPS, "{#NAME}"] |
Huawei | LUN {#NAME}: Write operations per second | Write IOPS of the LUN. |
SNMP | huawei.5300.v5[hwPerfLunWriteIOPS, "{#NAME}"] |
Huawei | LUN {#NAME}: Total traffic per second | Current total bandwidth for the LUN. |
SNMP | huawei.5300.v5[hwPerfLunTotalTraffic, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Read traffic per second | Current read bandwidth for the LUN. |
SNMP | huawei.5300.v5[hwPerfLunReadTraffic, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Write traffic per second | Current write bandwidth for the LUN. |
SNMP | huawei.5300.v5[hwPerfLunWriteTraffic, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | LUN {#NAME}: Capacity | Capacity of the LUN. |
SNMP | huawei.5300.v5[hwStorageLunCapacity, "{#NAME}"] Preprocessing: - MULTIPLIER: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Pool {#NAME}: Health status | Health status of a storage pool. For details, see definition of Enum Values (HEALTHSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoStoragePoolHealthStatus, "{#NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Pool {#NAME}: Running status | Operating status of a storage pool. For details, see definition of Enum Values (RUNNINGSTATUSE). https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference |
SNMP | huawei.5300.v5[hwInfoStoragePoolRunningStatus, "{#NAME}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Pool {#NAME}: Capacity total | Total capacity of a storage pool. |
SNMP | huawei.5300.v5[hwInfoStoragePoolTotalCapacity, "{#NAME}"] Preprocessing: - MULTIPLIER: - DISCARDUNCHANGEDHEARTBEAT: |
Huawei | Pool {#NAME}: Capacity free | Available capacity of a storage pool. |
SNMP | huawei.5300.v5[hwInfoStoragePoolFreeCapacity, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | Pool {#NAME}: Capacity used | Used capacity of a storage pool. |
SNMP | huawei.5300.v5[hwInfoStoragePoolSubscribedCapacity, "{#NAME}"] Preprocessing: - MULTIPLIER: |
Huawei | Pool {#NAME}: Capacity used percentage | Used capacity of a storage pool, in percent. |
CALCULATED | huawei.5300.v5[hwInfoStoragePoolFreeCapacityPct, "{#NAME}"] Expression: last(//huawei.5300.v5[hwInfoStoragePoolSubscribedCapacity, "{#NAME}"])/last(//huawei.5300.v5[hwInfoStoragePoolTotalCapacity, "{#NAME}"])*100 |
Status | Uptime (network) | MIB: SNMPv2-MIB The time (in hundredths of a second) since the network management portion of the system was last re-initialized. |
SNMP | system.net.uptime[sysUpTime.0] Preprocessing: - MULTIPLIER: |
Status | Uptime (hardware) | MIB: HOST-RESOURCES-MIB The amount of time since this host was last initialized. Note that this is different from sysUpTime in the SNMPv2-MIB [RFC1907] because sysUpTime is the uptime of the network management portion of the system. |
SNMP | system.hw.uptime[hrSystemUptime.0] Preprocessing: - CHECK_NOT_SUPPORTED ⛔️ON_FAIL: - MULTIPLIER: |
Status | SNMP agent availability | Availability of SNMP checks on the host. The value of this item corresponds to availability icons in the host list. Possible values: 0 - not available; 1 - available; 2 - unknown. |
INTERNAL | zabbix[host,snmp,available] |
Status | ICMP ping | - |
SIMPLE | icmpping |
Status | ICMP loss | - |
SIMPLE | icmppingloss |
Status | ICMP response time | - |
SIMPLE | icmppingsec |
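Note that huawei.5300.v5[hwInfoStoragePoolFreeCapacityPct, "{#NAME}"] is a calculated item: used capacity divided by total capacity, multiplied by 100. For example, a pool with 8 TB used out of 10 TB total yields 80, and this value is compared against the pool's own threshold in the capacity trigger below.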
Name | Description | Expression | Severity | Dependencies and additional info |
---|---|---|---|---|
Controller {#ID}: High CPU utilization | CPU utilization is too high. The system might be slow to respond. |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoControllerCPUUsage, "{#ID}"],5m)>{$CPU.UTIL.CRIT} |
WARNING | |
Node {#NODE}: High CPU utilization | CPU utilization is too high. The system might be slow to respond. |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwPerfNodeCPUUsage, "{#NODE}"],5m)>{$CPU.UTIL.CRIT} |
WARNING | |
System name has changed | System name has changed. Ack to close. |
last(/Huawei OceanStor 5300 V5 by SNMP/system.name,#1)<>last(/Huawei OceanStor 5300 V5 by SNMP/system.name,#2) and length(last(/Huawei OceanStor 5300 V5 by SNMP/system.name))>0 |
INFO | Manual close: YES |
OceanStor 5300 V5: Storage version has been changed | OceanStor 5300 V5 version has changed. Ack to close. |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[version],#1)<>last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[version],#2) and length(last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[version]))>0 |
INFO | Manual close: YES |
Controller {#ID}: Memory usage is too high | - |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoControllerMemoryUsage, "{#ID}"],{$HUAWEI.5300.MEM.MAX.TIME})>{$HUAWEI.5300.MEM.MAX.WARN} |
AVERAGE | |
Controller {#ID}: Health status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoControllerHealthStatus, "{#ID}"])<>1 |
HIGH | |
Controller {#ID}: Running status is not Online | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoControllerRunningStatus, "{#ID}"])<>27 |
AVERAGE | |
Controller {#ID}: Role has been changed | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoControllerRole, "{#ID}"],#1)<>last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoControllerRole, "{#ID}"],#2) |
WARNING | Manual close: YES |
Enclosure {#NAME}: Health status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoEnclosureHealthStatus, "{#NAME}"])<>1 |
HIGH | |
Enclosure {#NAME}: Running status is not Online | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoEnclosureRunningStatus, "{#NAME}"])<>27 |
AVERAGE | |
Enclosure {#NAME}: Temperature is too high | - |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoEnclosureTemperature, "{#NAME}"],{$HUAWEI.5300.TEMP.MAX.TIME})>{$HUAWEI.5300.TEMP.MAX.WARN} |
HIGH | |
FAN {#ID} on {#LOCATION}: Health status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoFanHealthStatus, "{#ID}:{#LOCATION}"])<>1 |
HIGH | |
FAN {#ID} on {#LOCATION}: Running status is not Running | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoFanRunningStatus, "{#ID}:{#LOCATION}"])<>2 |
AVERAGE | |
BBU {#ID} on {#LOCATION}: Health status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoBBUHealthStatus, "{#ID}:{#LOCATION}"])<>1 |
HIGH | |
BBU {#ID} on {#LOCATION}: Running status is not Online | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoBBURunningStatus, "{#ID}:{#LOCATION}"])<>2 |
AVERAGE | |
Disk {#MODEL} on {#LOCATION}: Health status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoDiskHealthStatus, "{#ID}"])<>1 |
HIGH | |
Disk {#MODEL} on {#LOCATION}: Running status is not Online | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoDiskRunningStatus, "{#ID}"])<>27 |
AVERAGE | |
Disk {#MODEL} on {#LOCATION}: Temperature is too high | - |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoDiskTemperature, "{#ID}"],{$HUAWEI.5300.DISK.TEMP.MAX.TIME})>{$HUAWEI.5300.DISK.TEMP.MAX.WARN:"{#MODEL}"} |
HIGH | |
Node {#NODE}: Average I/O latency is too high | - |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwPerfNodeDelay, "{#NODE}"],{$HUAWEI.5300.NODE.IO.DELAY.MAX.TIME})>{$HUAWEI.5300.NODE.IO.DELAY.MAX.WARN} |
WARNING | |
LUN {#NAME}: Status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwStorageLunStatus, "{#NAME}"])<>1 |
AVERAGE | |
LUN {#NAME}: Average I/O response time is too high | - |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwPerfLunAverageIOResponseTime, "{#NAME}"],{$HUAWEI.5300.LUN.IO.TIME.MAX.TIME})>{$HUAWEI.5300.LUN.IO.TIME.MAX.WARN} |
WARNING | |
Pool {#NAME}: Health status is not Normal | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoStoragePoolHealthStatus, "{#NAME}"])<>1 |
HIGH | |
Pool {#NAME}: Running status is not Online | - |
last(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoStoragePoolRunningStatus, "{#NAME}"])<>27 |
AVERAGE | |
Pool {#NAME}: Used capacity is too high | - |
min(/Huawei OceanStor 5300 V5 by SNMP/huawei.5300.v5[hwInfoStoragePoolFreeCapacityPct, "{#NAME}"],{$HUAWEI.5300.POOL.CAPACITY.THRESH.TIME})>{#THRESHOLD} |
AVERAGE | |
Host has been restarted | Uptime is less than 10 minutes. |
(last(/Huawei OceanStor 5300 V5 by SNMP/system.hw.uptime[hrSystemUptime.0])>0 and last(/Huawei OceanStor 5300 V5 by SNMP/system.hw.uptime[hrSystemUptime.0])<10m) or (last(/Huawei OceanStor 5300 V5 by SNMP/system.hw.uptime[hrSystemUptime.0])=0 and last(/Huawei OceanStor 5300 V5 by SNMP/system.net.uptime[sysUpTime.0])<10m) |
WARNING | Manual close: YES Depends on: - No SNMP data collection |
No SNMP data collection | SNMP is not available for polling. Please check device connectivity and SNMP settings. |
max(/Huawei OceanStor 5300 V5 by SNMP/zabbix[host,snmp,available],{$SNMP.TIMEOUT})=0 |
WARNING | Depends on: - Unavailable by ICMP ping |
Unavailable by ICMP ping | Last three attempts returned timeout. Please check device connectivity. |
max(/Huawei OceanStor 5300 V5 by SNMP/icmpping,#3)=0 |
HIGH | |
High ICMP ping loss | - |
min(/Huawei OceanStor 5300 V5 by SNMP/icmppingloss,5m)>{$ICMP_LOSS_WARN} and min(/Huawei OceanStor 5300 V5 by SNMP/icmppingloss,5m)<100 |
WARNING | Depends on: - Unavailable by ICMP ping |
High ICMP ping response time | - |
avg(/Huawei OceanStor 5300 V5 by SNMP/icmppingsec,5m)>{$ICMP_RESPONSE_TIME_WARN} |
WARNING | Depends on: - High ICMP ping loss - Unavailable by ICMP ping |
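Two notes on the triggers above. In the pool capacity trigger, {#THRESHOLD} is an LLD macro populated from hwInfoStoragePoolFullThreshold during storage pools discovery, so each pool is compared against the full threshold configured on the array itself. In the restart trigger, hardware uptime (hrSystemUptime) is preferred, and network uptime (sysUpTime) is used as a fallback when the hardware uptime item returns 0, i.e. when the device does not support HOST-RESOURCES-MIB.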
Please report any issues with the template at https://support.zabbix.com.
You can also provide feedback, discuss the template, or ask for help at ZABBIX forums.
For Zabbix version: 6.2 and higher. The template to monitor HPE Primera by HTTP.
It works without any external scripts and uses the script item.
This template was tested on:
See Zabbix template operation for basic instructions.
Start the WSAPI server if it is not already running: startwsapi
To check the WSAPI state, use the command: showwsapi
No specific Zabbix configuration is required.
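The template's script item logs in to the WSAPI and then polls it over HTTPS. For illustration only (this is not the template's code; the endpoint path and header name follow HPE's WSAPI documentation and should be treated as assumptions), a session can be verified manually, with the host name and credentials as placeholders:
curl -sk -X POST "https://primera.example.com:443/api/v1/credentials" -H "Content-Type: application/json" -d '{"user":"zabbix","password":"<password>"}'
The session key returned in the response is then passed in the X-HP3PAR-WSAPI-SessionKey header of subsequent requests, for example:
curl -sk "https://primera.example.com:443/api/v1/system" -H "X-HP3PAR-WSAPI-SessionKey: <key>"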
Name | Description | Default | ||
---|---|---|---|---|
{$HPE.PRIMERA.API.PASSWORD} | Specify password for WSAPI. |
`` | ||
{$HPE.PRIMERA.API.PORT} | The WSAPI port. |
443 |
||
{$HPE.PRIMERA.API.SCHEME} | The WSAPI scheme (http/https). |
https |
||
{$HPE.PRIMERA.API.USERNAME} | Specify user name for WSAPI. |
zabbix |
||
{$HPE.PRIMERA.CPG.NAME.MATCHES} | This macro is used in filters of CPGs discovery rule. |
.* |
||
{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES} | This macro is used in filters of CPGs discovery rule. |
CHANGE_IF_NEEDED |
||
{$HPE.PRIMERA.DATA.TIMEOUT} | Response timeout for WSAPI. |
15s |
||
{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES} | Filter of discoverable tasks by name. |
CHANGE_IF_NEEDED |
||
{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES} | Filter to exclude discovered tasks by name. |
.* |
||
{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES} | Filter of discoverable tasks by type. |
.* |
||
{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES} | Filter to exclude discovered tasks by type. |
CHANGE_IF_NEEDED |
||
{$HPE.PRIMERA.VOLUME.NAME.MATCHES} | This macro is used in filters of volume discovery rule. |
.* |
||
{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES} | This macro is used in filters of volume discovery rule. |
`^(admin|.srdata|.mgmtdata)$` |
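With this default, volume discovery skips the built-in admin, .srdata, and .mgmtdata system volumes, while all other volumes are discovered.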
There are no template links in this template.
Name | Description | Type | Key and additional info |
---|---|---|---|
Common provisioning groups discovery | List of CPGs resources. |
DEPENDENT | hpe.primera.cpg.discovery Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: Filter: AND - {#NAME} MATCHES_REGEX - {#NAME} NOT_MATCHES_REGEX |
Disks discovery | List of physical disk resources. |
DEPENDENT | hpe.primera.disks.discovery Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
Hosts discovery | List of host properties. |
DEPENDENT | hpe.primera.hosts.discovery Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: Filter: AND - {#NAME} EXISTS |
Ports discovery | List of ports. |
DEPENDENT | hpe.primera.ports.discovery Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: Filter: AND - {#TYPE} NOT_MATCHES_REGEX |
Tasks discovery | List of tasks started within last 24 hours. |
DEPENDENT | hpe.primera.tasks.discovery Preprocessing: - DISCARD_UNCHANGED_HEARTBEAT: Filter: AND - {#NAME} MATCHES_REGEX - {#NAME} NOT_MATCHES_REGEX - {#TYPE} MATCHES_REGEX - {#TYPE} NOT_MATCHES_REGEX |
Volumes discovery | List of storage volume resources. |
DEPENDENT | hpe.primera.volumes.discovery Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: Filter: AND - {#NAME} MATCHES_REGEX - {#NAME} NOT_MATCHES_REGEX |
Group | Name | Description | Type | Key and additional info |
---|---|---|---|---|
HPE | HPE Primera: Get data | The JSON with result of WSAPI requests. |
SCRIPT | hpe.primera.get.data Expression: The text is too long. Please see the template. |
HPE | HPE Primera: Get errors | A list of errors from WSAPI requests. |
DEPENDENT | hpe.primera.get.errors Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | HPE Primera: Get disks data | Disks data. |
DEPENDENT | hpe.primera.get.disks Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Get CPGs data | Common provisioning groups data. |
DEPENDENT | hpe.primera.get.cpgs Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Get hosts data | Hosts data. |
DEPENDENT | hpe.primera.get.hosts Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Get ports data | Ports data. |
DEPENDENT | hpe.primera.get.ports Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Get system data | System data. |
DEPENDENT | hpe.primera.get.system Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Get tasks data | Tasks data. |
DEPENDENT | hpe.primera.get.tasks Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Get volumes data | Volumes data. |
DEPENDENT | hpe.primera.get.volumes Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE Primera: Capacity allocated | Allocated capacity in the system. |
DEPENDENT | hpe.primera.system.capacity.allocated Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | HPE Primera: Chunklet size | Chunklet size. |
DEPENDENT | hpe.primera.system.chunklet.size Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | HPE Primera: System contact | Contact of the system. |
DEPENDENT | hpe.primera.system.contact Preprocessing: - JSONPATH: ⛔️ON_FAIL: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | HPE Primera: Capacity failed | Failed capacity in the system. |
DEPENDENT | hpe.primera.system.capacity.failed Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | HPE Primera: Capacity free | Free capacity in the system. |
DEPENDENT | hpe.primera.system.capacity.free Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | HPE Primera: System location | Location of the system. |
DEPENDENT | hpe.primera.system.location Preprocessing: - JSONPATH: ⛔️ON_FAIL: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | HPE Primera: Model | System model. |
DEPENDENT | hpe.primera.system.model Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | HPE Primera: System name | System name. |
DEPENDENT | hpe.primera.system.name Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | HPE Primera: Serial number | System serial number. |
DEPENDENT | hpe.primera.system.serialnumber Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: 1d |
HPE | HPE Primera: Software version number | Storage system software version number. |
DEPENDENT | hpe.primera.system.swversion Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: 1d |
HPE | HPE Primera: Capacity total | Total capacity in the system. |
DEPENDENT | hpe.primera.system.capacity.total Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | HPE Primera: Nodes total | Total number of nodes in the system. |
DEPENDENT | hpe.primera.system.nodes.total Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | HPE Primera: Nodes online | Number of online nodes in the system. |
DEPENDENT | hpe.primera.system.nodes.online Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | HPE Primera: Disks total | Number of physical disks. |
DEPENDENT | hpe.primera.disks.total Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | HPE Primera: Service ping | Checks if the service is running and accepting TCP connections. |
SIMPLE | net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | CPG [{#NAME}]: Get CPG data | CPG {#NAME} data |
DEPENDENT | hpe.primera.cpg["{#ID}",data] Preprocessing: - JSONPATH: |
HPE | CPG [{#NAME}]: Degraded state | Detailed state of the CPG: LDSNOTSTARTED (1) - LDs not started. NOTSTARTED (2) - VV not started. NEEDSCHECK (3) - check for consistency. NEEDSMAINTCHECK (4) - maintenance check is required. INTERNALCONSISTENCYERROR (5) - internal consistency error. SNAPDATAINVALID (6) - invalid snapshot data. PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data. STALE (8) - parts of the VV contain old data because of a copy-on-write operation. COPYFAILED (9) - a promote or copy operation to this volume failed. DEGRADEDAVAIL (10) - degraded due to availability. DEGRADEDPERF (11) - degraded due to performance. PROMOTING (12) - volume is the current target of a promote operation. COPYTARGET (13) - volume is the current target of a physical copy operation. RESYNCTARGET (14) - volume is the current target of a resynchronized copy operation. TUNING (15) - volume tuning is in progress. CLOSING (16) - volume is closing. REMOVING (17) - removing the volume. REMOVINGRETRY (18) - retrying a volume removal operation. CREATING (19) - creating a volume. COPYSOURCE (20) - copy source. IMPORTING (21) - importing a volume. CONVERTING (22) - converting a volume. INVALID (23) - invalid. EXCLUSIVE (24) - local storage system has exclusive access to the volume. CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set. STANDBY (26) - volume in standby mode. SDMETAINCONSISTENT (27) - SD Meta Inconsistent. SDNEEDSFIX (28) - SD needs fix. SDMETAFIXING (29) - SD meta fix. UNKNOWN (999) - unknown state. NOTSUPPORTEDBY_WSAPI (1000) - state not supported by WSAPI. |
DEPENDENT | hpe.primera.cpg.state["{#ID}",degraded] Preprocessing: - JSONPATH: |
HPE | CPG [{#NAME}]: Failed state | Detailed state of the CPG: LDSNOTSTARTED (1) - LDs not started. NOTSTARTED (2) - VV not started. NEEDSCHECK (3) - check for consistency. NEEDSMAINTCHECK (4) - maintenance check is required. INTERNALCONSISTENCYERROR (5) - internal consistency error. SNAPDATAINVALID (6) - invalid snapshot data. PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data. STALE (8) - parts of the VV contain old data because of a copy-on-write operation. COPYFAILED (9) - a promote or copy operation to this volume failed. DEGRADEDAVAIL (10) - degraded due to availability. DEGRADEDPERF (11) - degraded due to performance. PROMOTING (12) - volume is the current target of a promote operation. COPYTARGET (13) - volume is the current target of a physical copy operation. RESYNCTARGET (14) - volume is the current target of a resynchronized copy operation. TUNING (15) - volume tuning is in progress. CLOSING (16) - volume is closing. REMOVING (17) - removing the volume. REMOVINGRETRY (18) - retrying a volume removal operation. CREATING (19) - creating a volume. COPYSOURCE (20) - copy source. IMPORTING (21) - importing a volume. CONVERTING (22) - converting a volume. INVALID (23) - invalid. EXCLUSIVE (24) - local storage system has exclusive access to the volume. CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set. STANDBY (26) - volume in standby mode. SDMETAINCONSISTENT (27) - SD Meta Inconsistent. SDNEEDSFIX (28) - SD needs fix. SDMETAFIXING (29) - SD meta fix. UNKNOWN (999) - unknown state. NOTSUPPORTEDBY_WSAPI (1000) - state not supported by WSAPI. |
DEPENDENT | hpe.primera.cpg.state["{#ID}",failed] Preprocessing: - JSONPATH: - JAVASCRIPT: |
HPE | CPG [{#NAME}]: CPG space: Free | Free CPG space. |
DEPENDENT | hpe.primera.cpg.space["{#ID}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Number of FPVVs | Number of FPVVs (Fully Provisioned Virtual Volumes) allocated in the CPG. |
DEPENDENT | hpe.primera.cpg.fpvv["{#ID}",count] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | CPG [{#NAME}]: Number of TPVVs | Number of TPVVs (Thinly Provisioned Virtual Volumes) allocated in the CPG. |
DEPENDENT | hpe.primera.cpg.tpvv["{#ID}",count] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | CPG [{#NAME}]: Number of TDVVs | Number of TDVVs (Thinly Deduplicated Virtual Volume) created in the CPG. |
DEPENDENT | hpe.primera.cpg.tdvv["{#ID}",count] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | CPG [{#NAME}]: Raw space: Free | Raw free space. |
DEPENDENT | hpe.primera.cpg.space.raw["{#ID}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Raw space: Shared | Raw shared space. |
DEPENDENT | hpe.primera.cpg.space.raw["{#ID}",shared] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Raw space: Total | Raw total space. |
DEPENDENT | hpe.primera.cpg.space.raw["{#ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: CPG space: Shared | Shared CPG space. |
DEPENDENT | hpe.primera.cpg.space["{#ID}",shared] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: State | Overall state of the CPG: NORMAL (1) - normal operation; DEGRADED (2) - degraded state; FAILED (3) - abnormal operation; UNKNOWN (99) - unknown state. |
DEPENDENT | hpe.primera.cpg.state["{#ID}"] Preprocessing: - JSONPATH: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot administration: Total (raw) | Total physical (raw) logical disk space in snapshot administration. |
DEPENDENT | hpe.primera.cpg.space.sa["{#ID}",rawtotal] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot data: Total (raw) | Total physical (raw) logical disk space in snapshot data space. |
DEPENDENT | hpe.primera.cpg.space.sd["{#ID}",rawtotal] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: User space: Total (raw) | Total physical (raw) logical disk space in user data space. |
DEPENDENT | hpe.primera.cpg.space.usr["{#ID}",rawtotal] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot administration: Total | Total logical disk space in snapshot administration. |
DEPENDENT | hpe.primera.cpg.space.sa["{#ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot data: Total | Total logical disk space in snapshot data space. |
DEPENDENT | hpe.primera.cpg.space.sd["{#ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: User space: Total | Total logical disk space in user data space. |
DEPENDENT | hpe.primera.cpg.space.usr["{#ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: CPG space: Total | Total CPG space. |
DEPENDENT | hpe.primera.cpg.space["{#ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot administration: Used (raw) | Amount of physical (raw) logical disk used in snapshot administration. |
DEPENDENT | hpe.primera.cpg.space.sa["{#ID}",rawused] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:10m - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot data: Used (raw) | Amount of physical (raw) logical disk used in snapshot data space. |
DEPENDENT | hpe.primera.cpg.space.sd["{#ID}",rawused] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:10m - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: User space: Used (raw) | Amount of physical (raw) logical disk used in user data space. |
DEPENDENT | hpe.primera.cpg.space.usr["{#ID}",rawused] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:10m - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot administration: Used | Amount of logical disk used in snapshot administration. |
DEPENDENT | hpe.primera.cpg.space.sa["{#ID}",used] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: Snapshot data: Used | Amount of logical disk used in snapshot data space. |
DEPENDENT | hpe.primera.cpg.space.sd["{#ID}",used] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | CPG [{#NAME}]: Logical disk space: User space: Used | Amount of logical disk used in user data space. |
DEPENDENT | hpe.primera.cpg.space.usr["{#ID}",used] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Disk [{#POSITION}]: Get disk data | Disk [{#POSITION}] data |
DEPENDENT | hpe.primera.disk["{#ID}",data] Preprocessing: - JSONPATH: |
HPE | Disk [{#POSITION}]: Firmware version | Physical disk firmware version. |
DEPENDENT | hpe.primera.disk["{#ID}",fwversion] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Disk [{#POSITION}]: Free size | Physical disk free size. |
DEPENDENT | hpe.primera.disk["{#ID}",freesize] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:10m - MULTIPLIER: |
HPE | Disk [{#POSITION}]: Manufacturer | Physical disk manufacturer. |
DEPENDENT | hpe.primera.disk["{#ID}",manufacturer] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#POSITION}]: Model | Manufacturer's device ID for disk. |
DEPENDENT | hpe.primera.disk["{#ID}",model] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#POSITION}]: Path A0 degraded | Indicates if this is a degraded path for the disk. |
DEPENDENT | hpe.primera.disk["{#ID}",loopa0degraded] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGEDHEARTBEAT: - BOOLTO_DECIMAL |
HPE | Disk [{#POSITION}]: Path A1 degraded | Indicates if this is a degraded path for the disk. |
DEPENDENT | hpe.primera.disk["{#ID}",loopa1degraded] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGEDHEARTBEAT: - BOOLTO_DECIMAL |
HPE | Disk [{#POSITION}]: Path B0 degraded | Indicates if this is a degraded path for the disk. |
DEPENDENT | hpe.primera.disk["{#ID}",loopb0degraded] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGEDHEARTBEAT: - BOOLTO_DECIMAL |
HPE | Disk [{#POSITION}]: Path B1 degraded | Indicates if this is a degraded path for the disk. |
DEPENDENT | hpe.primera.disk["{#ID}",loopb1degraded] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGEDHEARTBEAT: - BOOLTO_DECIMAL |
HPE | Disk [{#POSITION}]: RPM | RPM of the physical disk. |
DEPENDENT | hpe.primera.disk["{#ID}",rpm] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#POSITION}]: Serial number | Disk drive serial number. |
DEPENDENT | hpe.primera.disk["{#ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Disk [{#POSITION}]: State | State of the physical disk: Normal (1) - physical disk is in Normal state; Degraded (2) - physical disk is not operating normally; New (3) - physical disk is new, needs to be admitted; Failed (4) - physical disk has failed; Unknown (99) - physical disk state is unknown. |
DEPENDENT | hpe.primera.disk["{#ID}",state] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#POSITION}]: Total size | Physical disk total size. |
DEPENDENT | hpe.primera.disk["{#ID}",totalsize] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h - MULTIPLIER: |
HPE | Host [{#NAME}]: Get host data | Host [{#NAME}] data |
DEPENDENT | hpe.primera.host["{#ID}",data] Preprocessing: - JSONPATH: |
HPE | Host [{#NAME}]: Comment | Additional information for the host. |
DEPENDENT | hpe.primera.host["{#ID}",comment] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Host [{#NAME}]: Contact | The host's owner and contact. |
DEPENDENT | hpe.primera.host["{#ID}",contact] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Host [{#NAME}]: IP address | The host's IP address. |
DEPENDENT | hpe.primera.host["{#ID}",ipaddress] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Host [{#NAME}]: Location | The host's location. |
DEPENDENT | hpe.primera.host["{#ID}",location] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Host [{#NAME}]: Model | The host's model. |
DEPENDENT | hpe.primera.host["{#ID}",model] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Host [{#NAME}]: OS | The operating system running on the host. |
DEPENDENT | hpe.primera.host["{#ID}",os] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Get port data | Port [{#NODE}:{#SLOT}:{#CARD.PORT}] data |
DEPENDENT | hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",data] Preprocessing: - JSONPATH: |
HPE | Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state | The state of the failover operation, shown for the two ports indicated in the N:S:P and Partner columns. The value can be one of the following: none (1) - no failover in operation; failover_pending (2) - in the process of failing over to partner; failed_over (3) - failed over to partner; active (4) - the partner port is failed over to this port; active_down (5) - the partner port is failed over to this port, but this port is down; active_failed (6) - the partner port is failed over to this port, but this port is down; failback_pending (7) - in the process of failing back from partner. |
DEPENDENT | hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failoverstate] Preprocessing: - JSONPATH: ⛔️ON FAIL:DISCARD_VALUE -> - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state | Port link state: CONFIGWAIT (1) - configuration wait; ALPAWAIT (2) - ALPA wait; LOGINWAIT (3) - login wait; READY (4) - link is ready; LOSSSYNC (5) - link is loss sync; ERRORSTATE (6) - in error state; XXX (7) - xxx; NONPARTICIPATE (8) - link did not participate; COREDUMP (9) - taking coredump; OFFLINE (10) - link is offline; FWDEAD (11) - firmware is dead; IDLEFORRESET (12) - link is idle for reset; DHCPINPROGRESS (13) - DHCP is in progress; PENDINGRESET (14) - link reset is pending; NEW (15) - link in new. This value is applicable for only virtual ports; DISABLED (16) - link in disabled. This value is applicable for only virtual ports; DOWN (17) - link in down. This value is applicable for only virtual ports; FAILED (18) - link in failed. This value is applicable for only virtual ports; PURGING (19) - link in purging. This value is applicable for only virtual ports. |
DEPENDENT | hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",linkstate] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Type | Port connection type: HOST (1) - FC port connected to hosts or fabric; DISK (2) - FC port connected to disks; FREE (3) - port is not connected to hosts or disks; IPORT (4) - port is in iport mode; RCFC (5) - FC port used for remote copy; PEER (6) - FC port used for data migration; RCIP (7) - IP (Ethernet) port used for remote copy; ISCSI (8) - iSCSI (Ethernet) port connected to hosts; CNA (9) - CNA port, which can be FCoE or iSCSI; FS (10) - Ethernet File Persona ports. |
DEPENDENT | hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Hardware type | Hardware type: FC (1) - Fibre channel HBA; ETH (2) - Ethernet NIC; iSCSI (3) - iSCSI HBA; CNA (4) - Converged network adapter; SAS (5) - SAS HBA; COMBO (6) - Combo card; NVME (7) - NVMe drive; UNKNOWN (99) - unknown hardware type. |
DEPENDENT | hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",hwtype] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | Task [{#NAME}]: Get task data | Task [{#NAME}] data |
DEPENDENT | hpe.primera.task["{#ID}",data] Preprocessing: - JSONPATH: |
HPE | Task [{#NAME}]: Finish time | Task finish time. |
DEPENDENT | hpe.primera.task["{#ID}",finishtime] Preprocessing: - JSONPATH: - DISCARD UNCHANGEDHEARTBEAT:6h - NOT MATCHESREGEX:^-$ ⛔️ON FAIL:DISCARD_VALUE -> - JAVASCRIPT: |
HPE | Task [{#NAME}]: Start time | Task start time. |
DEPENDENT | hpe.primera.task["{#ID}",starttime] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:6h - JAVASCRIPT: |
HPE | Task [{#NAME}]: Status | Task status: DONE (1) - task is finished; ACTIVE (2) - task is in progress; CANCELLED (3) - task is canceled; FAILED (4) - task failed. |
DEPENDENT | hpe.primera.task["{#ID}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Task [{#NAME}]: Type | Task type: VVCOPY (1) - track the physical copy operations; PHYSCOPYRESYNC (2) - track physical copy resynchronization operations; MOVEREGIONS (3) - track region move operations; PROMOTESV (4) - track virtual-copy promotions; REMOTECOPYSYNC (5) - track remote copy group synchronizations; REMOTECOPYREVERSE (6) - track the reversal of a remote copy group; REMOTECOPYFAILOVER (7) - track the change-over of a secondary volume group to a primary volume group; REMOTECOPYRECOVER (8) - track synchronization start after a failover operation from original secondary cluster to original primary cluster; REMOTECOPYRESTORE (9) - tracks the restoration process for groups that have already been recovered; COMPACTCPG (10) - track space consolidation in CPGs; COMPACTIDS (11) - track space consolidation in logical disks; SNAPSHOTACCOUNTING (12) - track progress of snapshot space usage accounting; CHECKVV (13) - track the progress of the check-volume operation; SCHEDULEDTASK (14) - track tasks that have been executed by the system scheduler; SYSTEMTASK (15) - track tasks that are periodically run by the storage system; BACKGROUNDTASK (16) - track commands started using the starttask command; IMPORTVV (17) - track tasks that migrate data to the local storage system; ONLINECOPY (18) - track physical copy of the volume while online (createvvcopy -online command); CONVERTVV (19) - track tasks that convert a volume from an FPVV to a TPVV, and the reverse; BACKGROUNDCOMMAND (20) - track background command tasks; CLXSYNC (21) - track CLX synchronization tasks; CLXRECOVERY (22) - track CLX recovery tasks; TUNESD (23) - tune copy space; TUNEVV (24) - tune virtual volume; TUNEVVROLLBACK (25) - tune virtual volume rollback; TUNEVVRESTART (26) - tune virtual volume restart; SYSTEMTUNING (27) - system tuning; NODERESCUE (28) - node rescue; REPAIRSYNC (29) - remote copy repair sync; REMOTECOPYSWOVER (30) - remote copy switchover; DEFRAGMENTATION (31) - defragmentation; ENCRYPTIONCHANGE (32) - encryption change; REMOTECOPYFAILSAFE (33) - remote copy failsafe; TUNETPVV (34) - tune thin virtual volume; REMOTECOPYCHGMODE (35) - remote copy change mode; ONLINEPROMOTE (37) - online promote snap; RELOCATEPD (38) - relocate PD; PERIODICCSS (39) - remote copy periodic CSS; TUNEVVLARGE (40) - tune large virtual volume; SDMETAFIXER (41) - compression SD meta fixer; DEDUPDRYRUN (42) - preview dedup ratio; COMPRDRYRUN (43) - compression estimation; DEDUPCOMPRDRYRUN (44) - compression and dedup estimation; UNKNOWN (99) - unknown task type. |
DEPENDENT | hpe.primera.task["{#ID}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Volume [{#NAME}]: Get volume data | Volume [{#NAME}] data |
DEPENDENT | hpe.primera.volume["{#ID}",data] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Administrative space: Free | Free administrative space. |
DEPENDENT | hpe.primera.volume.space.admin["{#ID}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Administrative space: Raw reserved | Raw reserved administrative space. |
DEPENDENT | hpe.primera.volume.space.admin["{#ID}",rawreserved] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:12h - MULTIPLIER: |
HPE | Volume [{#NAME}]: Administrative space: Reserved | Reserved administrative space. |
DEPENDENT | hpe.primera.volume.space.admin["{#ID}",reserved] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Administrative space: Used | Used administrative space. |
DEPENDENT | hpe.primera.volume.space.admin["{#ID}",used] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Compaction ratio | The compaction ratio indicates the overall amount of storage space saved with thin technology. |
DEPENDENT | hpe.primera.volume.capacity.efficiency["{#ID}",compaction] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Volume [{#NAME}]: Compression state | Volume compression state: YES (1) - compression is enabled on the volume; NO (2) - compression is disabled on the volume; OFF (3) - compression is turned off; NA (4) - compression is not available on the volume. |
DEPENDENT | hpe.primera.volume.state["{#ID}",compression] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Volume [{#NAME}]: Deduplication state | Volume deduplication state: YES (1) - enables deduplication on the volume; NO (2) - disables deduplication on the volume; NA (3) - deduplication is not available; OFF (4) - deduplication is turned off. |
DEPENDENT | hpe.primera.volume.state["{#ID}",deduplication] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Volume [{#NAME}]: Degraded state | Volume detailed state: LDSNOTSTARTED (1) - LDs not started. NOTSTARTED (2) - VV not started. NEEDSCHECK (3) - check for consistency. NEEDSMAINTCHECK (4) - maintenance check is required. INTERNALCONSISTENCYERROR (5) - internal consistency error. SNAPDATAINVALID (6) - invalid snapshot data. PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data. STALE (8) - parts of the VV contain old data because of a copy-on-write operation. COPYFAILED (9) - a promote or copy operation to this volume failed. DEGRADEDAVAIL (10) - degraded due to availability. DEGRADEDPERF (11) - degraded due to performance. PROMOTING (12) - volume is the current target of a promote operation. COPYTARGET (13) - volume is the current target of a physical copy operation. RESYNCTARGET (14) - volume is the current target of a resynchronized copy operation. TUNING (15) - volume tuning is in progress. CLOSING (16) - volume is closing. REMOVING (17) - removing the volume. REMOVINGRETRY (18) - retrying a volume removal operation. CREATING (19) - creating a volume. COPYSOURCE (20) - copy source. IMPORTING (21) - importing a volume. CONVERTING (22) - converting a volume. INVALID (23) - invalid. EXCLUSIVE (24) - local storage system has exclusive access to the volume. CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set. STANDBY (26) - volume in standby mode. SDMETAINCONSISTENT (27) - SD Meta Inconsistent. SDNEEDSFIX (28) - SD needs fix. SDMETAFIXING (29) - SD meta fix. UNKNOWN (999) - unknown state. NOTSUPPORTEDBY_WSAPI (1000) - state not supported by WSAPI. |
DEPENDENT | hpe.primera.volume.state["{#ID}",degraded] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Failed state | Volume detailed state: LDSNOTSTARTED (1) - LDs not started. NOTSTARTED (2) - VV not started. NEEDSCHECK (3) - check for consistency. NEEDSMAINTCHECK (4) - maintenance check is required. INTERNALCONSISTENCYERROR (5) - internal consistency error. SNAPDATAINVALID (6) - invalid snapshot data. PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data. STALE (8) - parts of the VV contain old data because of a copy-on-write operation. COPYFAILED (9) - a promote or copy operation to this volume failed. DEGRADEDAVAIL (10) - degraded due to availability. DEGRADEDPERF (11) - degraded due to performance. PROMOTING (12) - volume is the current target of a promote operation. COPYTARGET (13) - volume is the current target of a physical copy operation. RESYNCTARGET (14) - volume is the current target of a resynchronized copy operation. TUNING (15) - volume tuning is in progress. CLOSING (16) - volume is closing. REMOVING (17) - removing the volume. REMOVINGRETRY (18) - retrying a volume removal operation. CREATING (19) - creating a volume. COPYSOURCE (20) - copy source. IMPORTING (21) - importing a volume. CONVERTING (22) - converting a volume. INVALID (23) - invalid. EXCLUSIVE (24) - local storage system has exclusive access to the volume. CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set. STANDBY (26) - volume in standby mode. SDMETAINCONSISTENT (27) - SD Meta Inconsistent. SDNEEDSFIX (28) - SD needs fix. SDMETAFIXING (29) - SD meta fix. UNKNOWN (999) - unknown state. NOTSUPPORTEDBY_WSAPI (1000) - state not supported by WSAPI. |
DEPENDENT | hpe.primera.volume.state["{#ID}",failed] Preprocessing: - JSONPATH: - JAVASCRIPT: |
HPE | Volume [{#NAME}]: Overprovisioning ratio | Overprovisioning capacity efficiency ratio. |
DEPENDENT | hpe.primera.volume.capacity.efficiency["{#ID}",overprovisioning] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Volume [{#NAME}]: Remote copy status | Remote copy status of the volume: NONE (1) - volume is not associated with remote copy; PRIMARY (2) - volume is the primary copy; SECONDARY (3) - volume is the secondary copy; SNAP (4) - volume is the remote copy snapshot; SYNC (5) - volume is a remote copy snapshot being used for synchronization; DELETE (6) - volume is a remote copy snapshot that is marked for deletion; UNKNOWN (99) - remote copy status is unknown for this volume. |
DEPENDENT | hpe.primera.volume.status["{#ID}",rcopy] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Snapshot space: Free | Free snapshot space. |
DEPENDENT | hpe.primera.volume.space.snapshot["{#ID}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Snapshot space: Raw reserved | Raw reserved snapshot space. |
DEPENDENT | hpe.primera.volume.space.snapshot["{#ID}",rawreserved] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:12h - MULTIPLIER: |
HPE | Volume [{#NAME}]: Snapshot space: Reserved | Reserved snapshot space. |
DEPENDENT | hpe.primera.volume.space.snapshot["{#ID}",reserved] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Snapshot space: Used | Used snapshot space. |
DEPENDENT | hpe.primera.volume.space.snapshot["{#ID}",used] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: State | State of the volume: NORMAL (1) - normal operation; DEGRADED (2) - degraded state; FAILED (3) - abnormal operation; UNKNOWN (99) - unknown state. |
DEPENDENT | hpe.primera.volume.state["{#ID}"] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Storage space saved using compression | Indicates the amount of storage space saved using compression. |
DEPENDENT | hpe.primera.volume.capacity.efficiency["{#ID}",compression] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Volume [{#NAME}]: Storage space saved using deduplication | Indicates the amount of storage space saved using deduplication. |
DEPENDENT | hpe.primera.volume.capacity.efficiency["{#ID}",deduplication] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Volume [{#NAME}]: Storage space saved using deduplication and compression | Indicates the amount of storage space saved using deduplication and compression together. |
DEPENDENT | hpe.primera.volume.capacity.efficiency["{#ID}",reduction] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Volume [{#NAME}]: Total reserved space | Total reserved space. |
DEPENDENT | hpe.primera.volume.space.total["{#ID}",reserved] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Total space | Virtual size of volume. |
DEPENDENT | hpe.primera.volume.space.total["{#ID}",size] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Total used space | Total used space. Sum of used user space and used snapshot space. |
DEPENDENT | hpe.primera.volume.space.total["{#ID}",used] Preprocessing: - JSONPATH: ⛔️ON_FAIL: - MULTIPLIER: |
HPE | Volume [{#NAME}]: User space: Free | Free user space. |
DEPENDENT | hpe.primera.volume.space.user["{#ID}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: User space: Raw reserved | Raw reserved user space. |
DEPENDENT | hpe.primera.volume.space.user["{#ID}",rawreserved] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:12h - MULTIPLIER: |
HPE | Volume [{#NAME}]: User space: Reserved | Reserved user space. |
DEPENDENT | hpe.primera.volume.space.user["{#ID}",reserved] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: User space: Used | Used user space. |
DEPENDENT | hpe.primera.volume.space.user["{#ID}",used] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
Name | Description | Expression | Severity | Dependencies and additional info |
---|---|---|---|---|
HPE Primera: There are errors in requests to WSAPI | Zabbix has received errors in requests to WSAPI. |
length(last(/HPE Primera by HTTP/hpe.primera.get.errors))>0 |
AVERAGE | Depends on: - HPE Primera: Service is unavailable |
HPE Primera: Service is unavailable | - |
max(/HPE Primera by HTTP/net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"],5m)=0 |
HIGH | Manual close: YES |
CPG [{#NAME}]: Degraded | CPG [{#NAME}] is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=2 |
AVERAGE | |
CPG [{#NAME}]: Failed | CPG [{#NAME}] is in failed state. |
last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=3 |
HIGH | |
Disk [{#POSITION}]: Path A0 degraded | Disk [{#POSITION}] path A0 is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a0_degraded])=1 |
AVERAGE | |
Disk [{#POSITION}]: Path A1 degraded | Disk [{#POSITION}] path A1 is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a1_degraded])=1 |
AVERAGE | |
Disk [{#POSITION}]: Path B0 degraded | Disk [{#POSITION}] path B0 is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b0_degraded])=1 |
AVERAGE | |
Disk [{#POSITION}]: Path B1 degraded | Disk [{#POSITION}] path B1 is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b1_degraded])=1 |
AVERAGE | |
Disk [{#POSITION}]: Degraded | Disk [{#POSITION}] is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=2 |
AVERAGE | |
Disk [{#POSITION}]: Failed | Disk [{#POSITION}] is in failed state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=3 |
HIGH | |
Disk [{#POSITION}]: Unknown issue | Disk [{#POSITION}] is in unknown state. |
last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=99 |
INFO | |
Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state is {ITEM.VALUE1} | Port [{#NODE}:{#SLOT}:{#CARD.PORT}] has a failover error. |
last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>4 |
AVERAGE | |
Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1} | Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in ready state. |
last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>4 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>3 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>13 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>15 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>16 |
HIGH | |
Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1} | Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in ready state. |
last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=1 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=3 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=13 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=15 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=16 |
AVERAGE | |
Task [{#NAME}]: Cancelled | Task [{#NAME}] is cancelled. |
last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=3 |
INFO | |
Task [{#NAME}]: Failed | Task [{#NAME}] has failed. |
last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=4 |
AVERAGE | |
Volume [{#NAME}]: Degraded | Volume [{#NAME}] is in degraded state. |
last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=2 |
AVERAGE | |
Volume [{#NAME}]: Failed | Volume [{#NAME}] is in failed state. |
last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=3 |
HIGH |
Please report any issues with the template at https://support.zabbix.com
You can also provide feedback, discuss the template or ask for help with it at ZABBIX forums.
For Zabbix version: 6.2 and higher
The template to monitor HPE MSA 2060 by HTTP.
It works without any external scripts and uses the script item.
This template was tested on:
See Zabbix template operation for basic instructions.
No specific Zabbix configuration is required.
Name | Description | Default |
---|---|---|
{$HPE.MSA.API.PASSWORD} | Specify password for API. |
`` |
{$HPE.MSA.API.PORT} | Connection port for API. |
443 |
{$HPE.MSA.API.SCHEME} | Connection scheme for API. |
https |
{$HPE.MSA.API.USERNAME} | Specify user name for API. |
zabbix |
{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} | The critical threshold of the CPU utilization in %. |
90 |
{$HPE.MSA.DATA.TIMEOUT} | Response timeout for API. |
30s |
{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} | The critical threshold of the disk group space utilization in %. |
90 |
{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} | The warning threshold of the disk group space utilization in %. |
80 |
{$HPE.MSA.POOL.PUSED.MAX.CRIT} | The critical threshold of the pool space utilization in %. |
90 |
{$HPE.MSA.POOL.PUSED.MAX.WARN} | The warning threshold of the pool space utilization in %. |
80 |
There are no template links in this template.
Name | Description | Type | Key and additional info | ||
---|---|---|---|---|---|
Controllers discovery | Discover controllers. |
DEPENDENT | hpe.msa.controllers.discovery | ||
Disk groups discovery | Discover disk groups. |
DEPENDENT | hpe.msa.disks.groups.discovery | ||
Disks discovery | Discover disks. |
DEPENDENT | hpe.msa.disks.discovery Overrides: SSD life left | ||
Enclosures discovery | Discover enclosures. |
DEPENDENT | hpe.msa.enclosures.discovery | ||
Fans discovery | Discover fans. |
DEPENDENT | hpe.msa.fans.discovery | ||
FRU discovery | Discover FRU. |
DEPENDENT | hpe.msa.frus.discovery Filter: - {#TYPE} NOT_MATCHES_REGEX `^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$` | ||
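The FRU discovery filter above keeps only FRUs whose {#TYPE} macro does NOT match the exclusion pattern. An illustrative check of the NOT_MATCHES_REGEX logic, runnable in any JavaScript engine (the {#TYPE} values are examples):

```javascript
// A FRU is discovered only when its {#TYPE} does NOT match this pattern.
var excluded = /^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$/;

console.log(excluded.test('POWER_SUPPLY'));   // true  -> filtered out of discovery
console.log(excluded.test('CONTROLLER_A'));   // false -> kept and discovered
```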
Pools discovery | Discover pools. |
DEPENDENT | hpe.msa.pools.discovery | ||
Ports discovery | Discover ports. |
DEPENDENT | hpe.msa.ports.discovery | ||
Power supplies discovery | Discover power supplies. |
DEPENDENT | hpe.msa.power_supplies.discovery | ||
Volumes discovery | Discover volumes. |
DEPENDENT | hpe.msa.volumes.discovery |
Group | Name | Description | Type | Key and additional info |
---|---|---|---|---|
HPE | Get system | The system data. |
DEPENDENT | hpe.msa.get.system Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get FRU | FRU data. |
DEPENDENT | hpe.msa.get.fru Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get fans | Fans data. |
DEPENDENT | hpe.msa.get.fans Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get disks | Disks data. |
DEPENDENT | hpe.msa.get.disks Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get enclosures | Enclosures data. |
DEPENDENT | hpe.msa.get.enclosures Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get ports | Ports data. |
DEPENDENT | hpe.msa.get.ports Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get power supplies | Power supplies data. |
DEPENDENT | hpe.msa.get.power_supplies Preprocessing: - JSONPATH: ⛔️ON_FAIL: DISCARD_VALUE |
HPE | Get pools | Pools data. |
DEPENDENT | hpe.msa.get.pools Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get controllers | Controllers data. |
DEPENDENT | hpe.msa.get.controllers Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get controller statistics | Controllers statistics data. |
DEPENDENT | hpe.msa.get.controller_statistics Preprocessing: - JSONPATH: ⛔️ON_FAIL: DISCARD_VALUE |
HPE | Get disk groups | Disk groups data. |
DEPENDENT | hpe.msa.get.disks.groups Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get disk group statistics | Disk groups statistics data. |
DEPENDENT | hpe.msa.disks.get.groups.statistics Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get volumes | Volumes data. |
DEPENDENT | hpe.msa.get.volumes Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get volume statistics | Volumes statistics data. |
DEPENDENT | hpe.msa.get.volumes.statistics Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get method errors | A list of method errors from API requests. |
DEPENDENT | hpe.msa.get.errors Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | Product ID | The product model identifier. |
DEPENDENT | hpe.msa.system.product_id Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: 1d |
HPE | System contact | The name of the person who administers the system. |
DEPENDENT | hpe.msa.system.contact Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | System information | A brief description of what the system is used for or how it is configured. |
DEPENDENT | hpe.msa.system.info Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | System location | The location of the system. |
DEPENDENT | hpe.msa.system.location Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | System name | The name of the storage system. |
DEPENDENT | hpe.msa.system.name Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | Vendor name | The vendor name. |
DEPENDENT | hpe.msa.system.vendor_name Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: 1d |
HPE | System health | System health status. |
DEPENDENT | hpe.msa.system.health Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE MSA: Service ping | Check if HTTP/HTTPS service accepts TCP connections. |
SIMPLE | net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"] Preprocessing: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Get data | The discovered controller data. |
DEPENDENT | hpe.msa.get.controllers["{#CONTROLLER.ID}",data] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Get statistics data | The discovered controller statistics data. |
DEPENDENT | hpe.msa.get.controller_statistics["{#CONTROLLER.ID}",data] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Firmware version | Storage controller firmware version. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",firmware] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Part number | Part number of the controller. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",partnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: Serial number | Storage controller serial number. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: Health | Controller health status. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Status | Storage controller status. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Disks | Number of disks in the storage system. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",disks] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Pools | Number of pools in the storage system. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",pools] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Disk groups | Number of disk groups in the storage system. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",diskgroups] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: IP address | Controller network port IP address. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",ipaddress] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: Cache memory size | Controller cache memory size. |
DEPENDENT | hpe.msa.controllers.cache["{#CONTROLLER.ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Write utilization | Percentage of write cache in use, from 0 to 100. |
DEPENDENT | hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate | For the controller that owns the volume, the number of times the block to be read is found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate | For the controller that owns the volume, the number of times the block to be read is not found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate | For the controller that owns the volume, the number of times the block written to is found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate | For the controller that owns the volume, the number of times the block written to is not found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
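The four cache rate items above, like the other ", rate" items in this template, rely on the CHANGE_PER_SECOND preprocessing step, which divides the difference between two consecutive raw counter samples by the time elapsed between them. A sketch of the equivalent computation, with illustrative sample values:

```javascript
// What CHANGE_PER_SECOND computes from two consecutive counter samples;
// the values and Unix timestamps below are illustrative.
function changePerSecond(prev, curr) {
    return (curr.value - prev.value) / (curr.clock - prev.clock);
}

var prev = { value: 120000, clock: 1700000000 };
var curr = { value: 126000, clock: 1700000060 };
console.log(changePerSecond(prev, curr));   // 100 (cache operations per second)
```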
HPE | Controller [{#CONTROLLER.ID}]: CPU utilization | Percentage of time the CPU is busy, from 0 to 100. |
DEPENDENT | hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: IOPS, total rate | Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: IOPS, read rate | Number of read operations per second. |
DEPENDENT | hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: IOPS, write rate | Number of write operations per second. |
DEPENDENT | hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Data transfer rate: Total | The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads | The data read rate, in bytes per second. |
DEPENDENT | hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes | The data write rate, in bytes per second. |
DEPENDENT | hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Uptime | Number of seconds since the controller was restarted. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",uptime] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Get data | The discovered disk group data. |
DEPENDENT | hpe.msa.get.disks.groups["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Get statistics data | The discovered disk group statistics data. |
DEPENDENT | hpe.msa.get.disks.groups.statistics["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Disks count | Number of disks in the disk group. |
DEPENDENT | hpe.msa.disks.groups["{#NAME}",diskcount] Preprocessing: - JSONPATH: ⛔️ON FAIL:DISCARD_VALUE -> - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Pool space used | The percentage of pool capacity that the disk group occupies. |
DEPENDENT | hpe.msa.disks.groups.space["{#NAME}",poolutil] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | Disk group [{#NAME}]: Health | Disk group health. |
DEPENDENT | hpe.msa.disks.groups["{#NAME}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk group [{#NAME}]: Blocks size | The size of a block, in bytes. |
DEPENDENT | hpe.msa.disks.groups.blocks["{#NAME}",size] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Blocks free | Free space in blocks. |
DEPENDENT | hpe.msa.disks.groups.blocks["{#NAME}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Blocks total | Total space in blocks. |
DEPENDENT | hpe.msa.disks.groups.blocks["{#NAME}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Space free | The free space in the disk group. |
CALCULATED | hpe.msa.disks.groups.space["{#NAME}",free] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",free]) |
HPE | Disk group [{#NAME}]: Space total | The capacity of the disk group. |
CALCULATED | hpe.msa.disks.groups.space["{#NAME}",total] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",total]) |
HPE | Disk group [{#NAME}]: Space utilization | The space utilization percentage in the disk group. |
CALCULATED | hpe.msa.disks.groups.space["{#NAME}",util] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: 100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100 |
HPE | Disk group [{#NAME}]: RAID type | The RAID level of the disk group. |
DEPENDENT | hpe.msa.disks.groups.raid["{#NAME}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Status | The status of the disk group: - CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down. - DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged. - FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down. - FTOL: Fault tolerant. - MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing. - OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost. - QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. - QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. - QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group. - QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups. - STOP: The disk group is stopped. - UNKN: Unknown. - UP: Up. The disk group is online and does not have fault-tolerant attributes. |
DEPENDENT | hpe.msa.disks.groups["{#NAME}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: IOPS, total rate | Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.disks.groups.iops.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Average response time: Total | Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset. |
DEPENDENT | hpe.msa.disks.groups.avgrsptime["{#NAME}",total] Preprocessing: - JSONPATH: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: Average response time: Read | Average response time for all read operations, calculated over the interval since these statistics were last requested or reset. |
DEPENDENT | hpe.msa.disks.groups.avgrsptime["{#NAME}",read] Preprocessing: - JSONPATH: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: Average response time: Write | Average response time for all write operations, calculated over the interval since these statistics were last requested or reset. |
DEPENDENT | hpe.msa.disks.groups.avgrsptime["{#NAME}",write] Preprocessing: - JSONPATH: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: IOPS, read rate | Number of read operations per second. |
DEPENDENT | hpe.msa.disks.groups.iops.read["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Disk group [{#NAME}]: IOPS, write rate | Number of write operations per second. |
DEPENDENT | hpe.msa.disks.groups.iops.write["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Disk group [{#NAME}]: Data transfer rate: Total | The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Data transfer rate: Reads | The data read rate, in bytes per second. |
DEPENDENT | hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Disk group [{#NAME}]: Data transfer rate: Writes | The data write rate, in bytes per second. |
DEPENDENT | hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Pool [{#NAME}]: Get data | The discovered pool data. |
DEPENDENT | hpe.msa.get.pools["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Pool [{#NAME}]: Health | Pool health. |
DEPENDENT | hpe.msa.pools["{#NAME}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Pool [{#NAME}]: Blocks size | The size of a block, in bytes. |
DEPENDENT | hpe.msa.pools.blocks["{#NAME}",size] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Pool [{#NAME}]: Blocks available | Available space in blocks. |
DEPENDENT | hpe.msa.pools.blocks["{#NAME}",available] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Pool [{#NAME}]: Blocks total | Total space in blocks. |
DEPENDENT | hpe.msa.pools.blocks["{#NAME}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Pool [{#NAME}]: Space free | The free space in the pool. |
CALCULATED | hpe.msa.pools.space["{#NAME}",free] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",available]) |
HPE | Pool [{#NAME}]: Space total | The capacity of the pool. |
CALCULATED | hpe.msa.pools.space["{#NAME}",total] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",total]) |
HPE | Pool [{#NAME}]: Space utilization | The space utilization percentage in the pool. |
CALCULATED | hpe.msa.pools.space["{#NAME}",util] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: 100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100 |
HPE | Volume [{#NAME}]: Get data | The discovered volume data. |
DEPENDENT | hpe.msa.get.volumes["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Get statistics data | The discovered volume statistics data. |
DEPENDENT | hpe.msa.get.volumes.statistics["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Blocks size | The size of a block, in bytes. |
DEPENDENT | hpe.msa.volumes.blocks["{#NAME}",size] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Volume [{#NAME}]: Blocks allocated | The amount of blocks currently allocated to the volume. |
DEPENDENT | hpe.msa.volumes.blocks["{#NAME}",allocated] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Volume [{#NAME}]: Blocks total | Total space in blocks. |
DEPENDENT | hpe.msa.volumes.blocks["{#NAME}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Volume [{#NAME}]: Space allocated | The amount of space currently allocated to the volume. |
CALCULATED | hpe.msa.volumes.space["{#NAME}",allocated] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated]) |
HPE | Volume [{#NAME}]: Space total | The capacity of the volume. |
CALCULATED | hpe.msa.volumes.space["{#NAME}",total] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total]) |
HPE | Volume [{#NAME}]: IOPS, total rate | Total input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.volumes.iops.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: IOPS, read rate | Number of read operations per second. |
DEPENDENT | hpe.msa.volumes.iops.read["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: IOPS, write rate | Number of write operations per second. |
DEPENDENT | hpe.msa.volumes.iops.write["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Data transfer rate: Total | The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.volumes.data_transfer.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Data transfer rate: Reads | The data read rate, in bytes per second. |
DEPENDENT | hpe.msa.volumes.data_transfer.reads["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Data transfer rate: Writes | The data write rate, in bytes per second. |
DEPENDENT | hpe.msa.volumes.data_transfer.writes["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Read hits, rate | For the controller that owns the volume, the number of times the block to be read is found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.read.hits["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Read misses, rate | For the controller that owns the volume, the number of times the block to be read is not found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.read.misses["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Write hits, rate | For the controller that owns the volume, the number of times the block written to is found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.write.hits["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Write misses, rate | For the controller that owns the volume, the number of times the block written to is not found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.write.misses["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Enclosure [{#DURABLE.ID}]: Get data | The discovered enclosure data. |
DEPENDENT | hpe.msa.get.enclosures["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Enclosure [{#DURABLE.ID}]: Health | Enclosure health. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Status | Enclosure status. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",status] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Midplane serial number | Midplane serial number. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",midplaneserialnumber] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Part number | Enclosure part number. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",partnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Enclosure [{#DURABLE.ID}]: Model | Enclosure model. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",model] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Power | Enclosure power in watts. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",power] Preprocessing: - JSONPATH: |
HPE | Power supply [{#DURABLE.ID}]: Get data | The discovered power supply data. |
DEPENDENT | hpe.msa.get.power_supplies["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Power supply [{#DURABLE.ID}]: Health | Power supply health status. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ON FAIL:CUSTOM_VALUE -> 4 - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Status | Power supply status. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",status] Preprocessing: - JSONPATH: ⛔️ON FAIL:CUSTOM_VALUE -> 4 - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Part number | Power supply part number. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",partnumber] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Serial number | Power supply serial number. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Port [{#NAME}]: Get data | The discovered port data. |
DEPENDENT | hpe.msa.get.ports["{#NAME}",,data] Preprocessing: - JSONPATH: |
HPE | Port [{#NAME}]: Health | Port health status. |
DEPENDENT | hpe.msa.ports["{#NAME}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Port [{#NAME}]: Status | Port status. |
DEPENDENT | hpe.msa.ports["{#NAME}",status] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Port [{#NAME}]: Type | Port type. |
DEPENDENT | hpe.msa.ports["{#NAME}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Fan [{#DURABLE.ID}]: Get data | The discovered fan data. |
DEPENDENT | hpe.msa.get.fans["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Fan [{#DURABLE.ID}]: Health | Fan health status. |
DEPENDENT | hpe.msa.fans["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Fan [{#DURABLE.ID}]: Status | Fan status. |
DEPENDENT | hpe.msa.fans["{#DURABLE.ID}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Fan [{#DURABLE.ID}]: Speed | Fan speed (revolutions per minute). |
DEPENDENT | hpe.msa.fans["{#DURABLE.ID}",speed] Preprocessing: - JSONPATH: |
HPE | Disk [{#DURABLE.ID}]: Get data | The discovered disk data. |
DEPENDENT | hpe.msa.get.disks["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Disk [{#DURABLE.ID}]: Health | Disk health status. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Temperature status | Disk temperature status. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",temperaturestatus] Preprocessing: - JSONPATH: ⛔️ON FAIL:DISCARD_VALUE -> - INRANGE: ⛔️ONFAIL: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Temperature | Temperature of the disk. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",temperature] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Type | Disk type: SAS: Enterprise SAS spinning disk. SAS MDL: Midline SAS spinning disk. SSD SAS: SAS solid-state disk. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Disk group | If the disk is in a disk group, the disk group name. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",group] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Storage pool | If the disk is in a pool, the pool name. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",pool] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Vendor | Disk vendor. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",vendor] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Model | Disk model. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",model] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Serial number | Disk serial number. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Disk [{#DURABLE.ID}]: Blocks size | The size of a block, in bytes. |
DEPENDENT | hpe.msa.disks.blocks["{#DURABLE.ID}",size] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Blocks total | Total space in blocks. |
DEPENDENT | hpe.msa.disks.blocks["{#DURABLE.ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Space total | Total size of the disk. |
CALCULATED | hpe.msa.disks.space["{#DURABLE.ID}",total] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: last(//hpe.msa.disks.blocks["{#DURABLE.ID}",size])*last(//hpe.msa.disks.blocks["{#DURABLE.ID}",total]) |
HPE | Disk [{#DURABLE.ID}]: SSD life left | The percentage of disk life remaining. |
DEPENDENT | hpe.msa.disks.ssd["{#DURABLE.ID}",lifeleft] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Get data | The discovered FRU data. |
DEPENDENT | hpe.msa.get.frus["{#ENCLOSURE.ID}:{#LOCATION}",data] Preprocessing: - JSONPATH: |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Status | {#DESCRIPTION}. FRU status: Absent: The FRU is not present. Fault: The FRU's health is Degraded or Fault. Invalid data: The FRU ID data is invalid. The FRU's EEPROM is improperly programmed. OK: The FRU is operating normally. Power off: The FRU is powered off. |
DEPENDENT | hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Part number | {#DESCRIPTION}. Part number of the FRU. |
DEPENDENT | hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",partnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Serial number | {#DESCRIPTION}. FRU serial number. |
DEPENDENT | hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
Zabbix raw items | HPE MSA: Get data | The JSON with result of API requests. |
SCRIPT | hpe.msa.get.data Expression: The text is too long. Please see the template. |
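The master script item's body is too long to reproduce here. A heavily simplified sketch of its general shape is shown below, using Zabbix's built-in HttpRequest object; the endpoint paths, header names and login flow are illustrative assumptions, not the template's verbatim code:

```javascript
// Simplified sketch of an MSA "get data" script item. "value" carries the
// item's parameters as JSON; paths, headers and login flow are assumptions.
var params  = JSON.parse(value);   // e.g. {scheme, host, port, username, password}
var base    = params.scheme + '://' + params.host + ':' + params.port + '/api';
var request = new HttpRequest();

// Hypothetical login step: send HTTP basic credentials to /login and
// read a session key back from the JSON response.
request.addHeader('Authorization: Basic ' + btoa(params.username + ':' + params.password));
request.addHeader('datatype: json');
var session = JSON.parse(request.get(base + '/login'));

request.clearHeader();
request.addHeader('sessionKey: ' + session.status[0].response);
request.addHeader('datatype: json');

// Each "Get ..." dependent item above is then fed from one master JSON.
var result = { system: JSON.parse(request.get(base + '/show/system')) };
return JSON.stringify(result);
```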
Name | Description | Expression | Severity | Dependencies and additional info |
---|---|---|---|---|
There are errors in method requests to API | There are errors in method requests to API. |
length(last(/HPE MSA 2060 Storage by HTTP/hpe.msa.get.errors))>0 |
AVERAGE | Depends on: - Service is down or unavailable |
System health is in degraded state | System health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=1 |
WARNING | |
System health is in fault state | System health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=2 |
AVERAGE | |
System health is in unknown state | System health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=3 |
INFO | |
Service is down or unavailable | HTTP/HTTPS service is down or unable to establish TCP connection. |
max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0 |
HIGH | |
Controller [{#CONTROLLER.ID}]: Controller health is in degraded state | Controller health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1 |
WARNING | Depends on: - Controller [{#CONTROLLER.ID}]: Controller is down |
Controller [{#CONTROLLER.ID}]: Controller health is in fault state | Controller health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2 |
AVERAGE | Depends on: - Controller [{#CONTROLLER.ID}]: Controller is down |
Controller [{#CONTROLLER.ID}]: Controller health is in unknown state | Controller health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3 |
INFO | Depends on: - Controller [{#CONTROLLER.ID}]: Controller is down |
Controller [{#CONTROLLER.ID}]: Controller is down | The controller is down. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1 |
HIGH | |
Controller [{#CONTROLLER.ID}]: High CPU utilization | Controller CPU utilization is too high. The system might be slow to respond. |
min(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |
WARNING | |
Controller [{#CONTROLLER.ID}]: Controller has been restarted | The controller uptime is less than 10 minutes. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m |
WARNING | |
Disk group [{#NAME}]: Disk group health is in degraded state | Disk group health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1 |
WARNING | |
Disk group [{#NAME}]: Disk group health is in fault state | Disk group health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2 |
AVERAGE | |
Disk group [{#NAME}]: Disk group health is in unknown state | Disk group health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3 |
INFO | |
Disk group [{#NAME}]: Disk group space is low | Disk group is running low on free space (more than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% of its space is used). |
min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"} |
WARNING | Depends on: - Disk group [{#NAME}]: Disk group space is critically low |
Disk group [{#NAME}]: Disk group space is critically low | Disk group is running critically low on free space (more than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% of its space is used). |
min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"} |
AVERAGE | |
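Both space triggers above resolve their threshold macros with {#NAME} as macro context, so the template-wide default can be overridden for an individual disk group at host level. An illustrative override (names and values are examples):

```
{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}           = 80   (template default)
{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"dgA01"}   = 85   (host-level override for disk group dgA01)
```

With the override in place, the WARNING trigger for disk group dgA01 fires only after its space utilization stays above 85% for 5 minutes.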
Disk group [{#NAME}]: Disk group is fault tolerant with a down disk | The disk group is online and fault tolerant, but some of its disks are down. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1 |
AVERAGE | |
Disk group [{#NAME}]: Disk group has damaged disks | The disk group is online and fault tolerant, but some of its disks are damaged. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9 |
AVERAGE | |
Disk group [{#NAME}]: Disk group has missing disks | The disk group is online and fault tolerant, but some of its disks are missing. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is offline | Either the disk group is using offline initialization, or its disks are down and data may be lost. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined critical | The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined offline | The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined unsupported | The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk | The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is stopped | The disk group is stopped. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7 |
AVERAGE | |
Disk group [{#NAME}]: Disk group status is critical | The disk group is online but isn't fault tolerant because some of its disks are down. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2 |
AVERAGE | |
Pool [{#NAME}]: Pool health is in degraded state | Pool health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1 |
WARNING | |
Pool [{#NAME}]: Pool health is in fault state | Pool health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2 |
AVERAGE | |
Pool [{#NAME}]: Pool health is in unknown state | Pool [{#NAME}] health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3 |
INFO | |
Pool [{#NAME}]: Pool space is low | Pool is running low on free space (more than {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}% of its space is used). |
min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"} |
WARNING | Depends on: - Pool [{#NAME}]: Pool space is critically low |
Pool [{#NAME}]: Pool space is critically low | Pool is running critically low on free space (more than {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% of its space is used). |
min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"} |
AVERAGE | |
Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state | Enclosure health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1 |
WARNING | |
Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state | Enclosure health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state | Enclosure health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3 |
INFO | |
Enclosure [{#DURABLE.ID}]: Enclosure has critical status | Enclosure has critical status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2 |
HIGH | |
Enclosure [{#DURABLE.ID}]: Enclosure has warning status | Enclosure has warning status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3 |
WARNING | |
Enclosure [{#DURABLE.ID}]: Enclosure is unavailable | Enclosure is unavailable. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7 |
HIGH | |
Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable | Enclosure is unrecoverable. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4 |
HIGH | |
Enclosure [{#DURABLE.ID}]: Enclosure has unknown status | Enclosure has unknown status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6 |
INFO | |
Power supply [{#DURABLE.ID}]: Power supply health is in degraded state | Power supply health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1 |
WARNING | |
Power supply [{#DURABLE.ID}]: Power supply health is in fault state | Power supply health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Power supply [{#DURABLE.ID}]: Power supply health is in unknown state | Power supply health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3 |
INFO | |
Power supply [{#DURABLE.ID}]: Power supply has error status | Power supply has error status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2 |
AVERAGE | |
Power supply [{#DURABLE.ID}]: Power supply has warning status | Power supply has warning status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1 |
WARNING | |
Power supply [{#DURABLE.ID}]: Power supply has unknown status | Power supply has unknown status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4 |
INFO | |
Port [{#NAME}]: Port health is in degraded state | Port health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1 |
WARNING | |
Port [{#NAME}]: Port health is in fault state | Port health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2 |
AVERAGE | |
Port [{#NAME}]: Port health is in unknown state | Port health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3 |
INFO | |
Port [{#NAME}]: Port has error status | Port has error status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2 |
AVERAGE | |
Port [{#NAME}]: Port has warning status | Port has warning status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1 |
WARNING | |
Port [{#NAME}]: Port has unknown status | Port has unknown status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4 |
INFO | |
Fan [{#DURABLE.ID}]: Fan health is in degraded state | Fan health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1 |
WARNING | |
Fan [{#DURABLE.ID}]: Fan health is in fault state | Fan health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Fan [{#DURABLE.ID}]: Fan health is in unknown state | Fan health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3 |
INFO | |
Fan [{#DURABLE.ID}]: Fan has error status | Fan has error status. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1 |
AVERAGE | |
Fan [{#DURABLE.ID}]: Fan is missing | Fan is missing. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3 |
INFO | |
Fan [{#DURABLE.ID}]: Fan is off | Fan is off. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2 |
WARNING | |
Disk [{#DURABLE.ID}]: Disk health is in degraded state | Disk health is in degraded state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1 |
WARNING | |
Disk [{#DURABLE.ID}]: Disk health is in fault state | Disk health is in fault state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Disk [{#DURABLE.ID}]: Disk health is in unknown state | Disk health is in unknown state. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3 |
INFO | |
Disk [{#DURABLE.ID}]: Disk temperature is high | Disk temperature is high. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3 |
WARNING | |
Disk [{#DURABLE.ID}]: Disk temperature is critically high | Disk temperature is critically high. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2 |
AVERAGE | |
Disk [{#DURABLE.ID}]: Disk temperature is unknown | Disk temperature is unknown. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4 |
INFO | |
FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU status is Degraded or Fault | FRU status is Degraded or Fault. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=1 |
AVERAGE | |
FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU ID data is invalid | The FRU ID data is invalid. The FRU's EEPROM is improperly programmed. |
last(/HPE MSA 2060 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=0 |
WARNING |
Please report any issues with the template at https://support.zabbix.com
You can also provide feedback, discuss the template or ask for help with it at ZABBIX forums.
For Zabbix version: 6.2 and higher
The template to monitor HPE MSA 2040 by HTTP.
It works without any external scripts and uses the script item.
This template was tested on:
See Zabbix template operation for basic instructions.
No specific Zabbix configuration is required.
Name | Description | Default |
---|---|---|
{$HPE.MSA.API.PASSWORD} | Specify password for API. |
`` |
{$HPE.MSA.API.PORT} | Connection port for API. |
443 |
{$HPE.MSA.API.SCHEME} | Connection scheme for API. |
https |
{$HPE.MSA.API.USERNAME} | Specify user name for API. |
zabbix |
{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} | The critical threshold of the CPU utilization in %. |
90 |
{$HPE.MSA.DATA.TIMEOUT} | Response timeout for API. |
30s |
{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} | The critical threshold of the disk group space utilization in %. |
90 |
{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} | The warning threshold of the disk group space utilization in %. |
80 |
{$HPE.MSA.POOL.PUSED.MAX.CRIT} | The critical threshold of the pool space utilization in %. |
90 |
{$HPE.MSA.POOL.PUSED.MAX.WARN} | The warning threshold of the pool space utilization in %. |
80 |
There are no template links in this template.
Name | Description | Type | Key and additional info | ||
---|---|---|---|---|---|
Controllers discovery | Discover controllers. |
DEPENDENT | hpe.msa.controllers.discovery | ||
Disk groups discovery | Discover disk groups. |
DEPENDENT | hpe.msa.disks.groups.discovery | ||
Disks discovery | Discover disks. |
DEPENDENT | hpe.msa.disks.discovery Overrides: SSD life left | ||
Enclosures discovery | Discover enclosures. |
DEPENDENT | hpe.msa.enclosures.discovery | ||
Fans discovery | Discover fans. |
DEPENDENT | hpe.msa.fans.discovery | ||
FRU discovery | Discover FRU. |
DEPENDENT | hpe.msa.frus.discovery Filter: - {#TYPE} NOT_MATCHES_REGEX `^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$` | ||
Pools discovery | Discover pools. |
DEPENDENT | hpe.msa.pools.discovery | ||
Ports discovery | Discover ports. |
DEPENDENT | hpe.msa.ports.discovery | ||
Power supplies discovery | Discover power supplies. |
DEPENDENT | hpe.msa.power_supplies.discovery | ||
Volumes discovery | Discover volumes. |
DEPENDENT | hpe.msa.volumes.discovery |
Group | Name | Description | Type | Key and additional info |
---|---|---|---|---|
HPE | Get system | The system data. |
DEPENDENT | hpe.msa.get.system Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get FRU | FRU data. |
DEPENDENT | hpe.msa.get.fru Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get fans | Fans data. |
DEPENDENT | hpe.msa.get.fans Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get disks | Disks data. |
DEPENDENT | hpe.msa.get.disks Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get enclosures | Enclosures data. |
DEPENDENT | hpe.msa.get.enclosures Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get ports | Ports data. |
DEPENDENT | hpe.msa.get.ports Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get power supplies | Power supplies data. |
DEPENDENT | hpe.msa.get.power_supplies Preprocessing: - JSONPATH: ⛔️ON_FAIL: DISCARD_VALUE |
HPE | Get pools | Pools data. |
DEPENDENT | hpe.msa.get.pools Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get controllers | Controllers data. |
DEPENDENT | hpe.msa.get.controllers Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get controller statistics | Controllers statistics data. |
DEPENDENT | hpe.msa.get.controller_statistics Preprocessing: - JSONPATH: ⛔️ON_FAIL: DISCARD_VALUE |
HPE | Get disk groups | Disk groups data. |
DEPENDENT | hpe.msa.get.disks.groups Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get disk group statistics | Disk groups statistics data. |
DEPENDENT | hpe.msa.disks.get.groups.statistics Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get volumes | Volumes data. |
DEPENDENT | hpe.msa.get.volumes Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get volume statistics | Volumes statistics data. |
DEPENDENT | hpe.msa.get.volumes.statistics Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | Get method errors | A list of method errors from API requests. |
DEPENDENT | hpe.msa.get.errors Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | Product ID | The product model identifier. |
DEPENDENT | hpe.msa.system.product_id Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: 1d |
HPE | System contact | The name of the person who administers the system. |
DEPENDENT | hpe.msa.system.contact Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | System information | A brief description of what the system is used for or how it is configured. |
DEPENDENT | hpe.msa.system.info Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | System location | The location of the system. |
DEPENDENT | hpe.msa.system.location Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | System name | The name of the storage system. |
DEPENDENT | hpe.msa.system.name Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Vendor name | The vendor name. |
DEPENDENT | hpe.msa.system.vendorname Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | System health | System health status. |
DEPENDENT | hpe.msa.system.health Preprocessing: - JSONPATH: ⛔️ON_FAIL: |
HPE | HPE MSA: Service ping | Check if HTTP/HTTPS service accepts TCP connections. |
SIMPLE | net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"] Preprocessing: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Get data | The discovered controller data. |
DEPENDENT | hpe.msa.get.controllers["{#CONTROLLER.ID}",data] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Get statistics data | The discovered controller statistics data. |
DEPENDENT | hpe.msa.get.controller_statistics["{#CONTROLLER.ID}",data] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Firmware version | Storage controller firmware version. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",firmware] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Part number | Part number of the controller. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",partnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: Serial number | Storage controller serial number. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: Health | Controller health status. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Status | Storage controller status. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Disks | Number of disks in the storage system. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",disks] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Pools | Number of pools in the storage system. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",pools] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Controller [{#CONTROLLER.ID}]: Disk groups | Number of disk groups in the storage system. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",diskgroups] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: IP address | Controller network port IP address. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",ipaddress] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Controller [{#CONTROLLER.ID}]: Cache memory size | Controller cache memory size. |
DEPENDENT | hpe.msa.controllers.cache["{#CONTROLLER.ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Write utilization | Percentage of write cache in use, from 0 to 100. |
DEPENDENT | hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate | For the controller that owns the volume, the number of times the block to be read is found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate | For the controller that owns the volume, the number of times the block to be read is not found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate | For the controller that owns the volume, the number of times the block written to is found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate | For the controller that owns the volume, the number of times the block written to is not found in cache per second. |
DEPENDENT | hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: CPU utilization | Percentage of time the CPU is busy, from 0 to 100. |
DEPENDENT | hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: IOPS, total rate | Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: IOPS, read rate | Number of read operations per second. |
DEPENDENT | hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: IOPS, write rate | Number of write operations per second. |
DEPENDENT | hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Data transfer rate: Total | The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: |
HPE | Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads | The data read rate, in bytes per second. |
DEPENDENT | hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes | The data write rate, in bytes per second. |
DEPENDENT | hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Controller [{#CONTROLLER.ID}]: Uptime | Number of seconds since the controller was restarted. |
DEPENDENT | hpe.msa.controllers["{#CONTROLLER.ID}",uptime] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Get data | The discovered disk group data. |
DEPENDENT | hpe.msa.get.disks.groups["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Get statistics data | The discovered disk group statistics data. |
DEPENDENT | hpe.msa.get.disks.groups.statistics["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Disks count | Number of disks in the disk group. |
DEPENDENT | hpe.msa.disks.groups["{#NAME}",diskcount] Preprocessing: - JSONPATH: ⛔️ON FAIL:DISCARD_VALUE -> - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Pool space used | The percentage of pool capacity that the disk group occupies. |
DEPENDENT | hpe.msa.disks.groups.space["{#NAME}",poolutil] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | Disk group [{#NAME}]: Health | Disk group health. |
DEPENDENT | hpe.msa.disks.groups["{#NAME}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk group [{#NAME}]: Space free | The free space in the disk group. |
DEPENDENT | hpe.msa.disks.groups.space["{#NAME}",free] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: Space total | The capacity of the disk group. |
DEPENDENT | hpe.msa.disks.groups.space["{#NAME}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: Space utilization | The space utilization percentage in the disk group. |
CALCULATED | hpe.msa.disks.groups.space["{#NAME}",util] Preprocessing: - DISCARDUNCHANGEDHEARTBEAT: Expression: 100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100 |
HPE | Disk group [{#NAME}]: RAID type | The RAID level of the disk group. |
DEPENDENT | hpe.msa.disks.groups.raid["{#NAME}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: Status | The status of the disk group: - CRIT: Critical. The disk group is online but isn't fault tolerant because some of it's disks are down. - DMGD: Damaged. The disk group is online and fault tolerant, but some of it's disks are damaged. - FTDN: Fault tolerant with a down disk.The disk group is online and fault tolerant, but some of it's disks are down. - FTOL: Fault tolerant. - MSNG: Missing. The disk group is online and fault tolerant, but some of it's disks are missing. - OFFL: Offline. Either the disk group is using offline initialization, or it's disks are down and data may be lost. - QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCRor QTDN, the disk group is automatically dequarantined. - QTDN: Quarantined with a down disk. The RAID6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCRor QTDN, the disk group is automatically dequarantined. - QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group. - QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups. - STOP: The disk group is stopped. - UNKN: Unknown. - UP: Up. The disk group is online and does not have fault-tolerant attributes. |
DEPENDENT | hpe.msa.disks.groups["{#NAME}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk group [{#NAME}]: IOPS, total rate | Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.disks.groups.iops.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Average response time: Total | Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset. |
DEPENDENT | hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total] Preprocessing: - JSONPATH: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: Average response time: Read | Average response time for all read operations, calculated over the interval since these statistics were last requested or reset. |
DEPENDENT | hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read] Preprocessing: - JSONPATH: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: Average response time: Write | Average response time for all write operations, calculated over the interval since these statistics were last requested or reset. |
DEPENDENT | hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write] Preprocessing: - JSONPATH: - MULTIPLIER: |
HPE | Disk group [{#NAME}]: IOPS, read rate | Number of read operations per second. |
DEPENDENT | hpe.msa.disks.groups.iops.read["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Disk group [{#NAME}]: IOPS, write rate | Number of write operations per second. |
DEPENDENT | hpe.msa.disks.groups.iops.write["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Disk group [{#NAME}]: Data transfer rate: Total | The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Disk group [{#NAME}]: Data transfer rate: Reads | The data read rate, in bytes per second. |
DEPENDENT | hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Disk group [{#NAME}]: Data transfer rate: Writes | The data write rate, in bytes per second. |
DEPENDENT | hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Pool [{#NAME}]: Get data | The discovered pool data. |
DEPENDENT | hpe.msa.get.pools["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Pool [{#NAME}]: Health | Pool health. |
DEPENDENT | hpe.msa.pools["{#NAME}",health] Preprocessing: - JSONPATH: ⛔️ON_FAIL: - DISCARD_UNCHANGED_HEARTBEAT: |
HPE | Pool [{#NAME}]: Space free | The free space in the pool. |
DEPENDENT | hpe.msa.pools.space["{#NAME}",free] Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: - MULTIPLIER: |
HPE | Pool [{#NAME}]: Space total | The capacity of the pool. |
DEPENDENT | hpe.msa.pools.space["{#NAME}",total] Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: - MULTIPLIER: |
HPE | Pool [{#NAME}]: Space utilization | The space utilization percentage in the pool. |
CALCULATED | hpe.msa.pools.space["{#NAME}",util] Preprocessing: - DISCARD_UNCHANGED_HEARTBEAT: Expression: 100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100 |
HPE | Volume [{#NAME}]: Get data | The discovered volume data. |
DEPENDENT | hpe.msa.get.volumes["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Get statistics data | The discovered volume statistics data. |
DEPENDENT | hpe.msa.get.volumes.statistics["{#NAME}",data] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Space allocated | The amount of space currently allocated to the volume. |
DEPENDENT | hpe.msa.volumes.space["{#NAME}",allocated] Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: Space total | The capacity of the volume. |
DEPENDENT | hpe.msa.volumes.space["{#NAME}",total] Preprocessing: - JSONPATH: - DISCARD_UNCHANGED_HEARTBEAT: - MULTIPLIER: |
HPE | Volume [{#NAME}]: IOPS, total rate | Total input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.volumes.iops.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: IOPS, read rate | Number of read operations per second. |
DEPENDENT | hpe.msa.volumes.iops.read["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: IOPS, write rate | Number of write operations per second. |
DEPENDENT | hpe.msa.volumes.iops.write["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Data transfer rate: Total | The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart. |
DEPENDENT | hpe.msa.volumes.data_transfer.total["{#NAME}",rate] Preprocessing: - JSONPATH: |
HPE | Volume [{#NAME}]: Data transfer rate: Reads | The data read rate, in bytes per second. |
DEPENDENT | hpe.msa.volumes.data_transfer.reads["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Data transfer rate: Writes | The data write rate, in bytes per second. |
DEPENDENT | hpe.msa.volumes.data_transfer.writes["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Read hits, rate | For the controller that owns the volume, the number of times the block to be read is found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.read.hits["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Read misses, rate | For the controller that owns the volume, the number of times the block to be read is not found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.read.misses["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Write hits, rate | For the controller that owns the volume, the number of times the block written to is found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.write.hits["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Volume [{#NAME}]: Cache: Write misses, rate | For the controller that owns the volume, the number of times the block written to is not found in cache per second. |
DEPENDENT | hpe.msa.volumes.cache.write.misses["{#NAME}",rate] Preprocessing: - JSONPATH: - CHANGE_PER_SECOND |
HPE | Enclosure [{#DURABLE.ID}]: Get data | The discovered enclosure data. |
DEPENDENT | hpe.msa.get.enclosures["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Enclosure [{#DURABLE.ID}]: Health | Enclosure health. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Status | Enclosure status. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",status] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Midplane serial number | Midplane serial number. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",midplaneserialnumber] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Part number | Enclosure part number. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",partnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Enclosure [{#DURABLE.ID}]: Model | Enclosure model. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",model] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Enclosure [{#DURABLE.ID}]: Power | Enclosure power in watts. |
DEPENDENT | hpe.msa.enclosures["{#DURABLE.ID}",power] Preprocessing: - JSONPATH: |
HPE | Power supply [{#DURABLE.ID}]: Get data | The discovered power supply data. |
DEPENDENT | hpe.msa.get.power_supplies["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Power supply [{#DURABLE.ID}]: Health | Power supply health status. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ON FAIL:CUSTOM_VALUE -> 4 - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Status | Power supply status. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",status] Preprocessing: - JSONPATH: ⛔️ON FAIL:CUSTOM_VALUE -> 4 - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Part number | Power supply part number. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",partnumber] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Serial number | Power supply serial number. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Power supply [{#DURABLE.ID}]: Temperature | Power supply temperature. |
DEPENDENT | hpe.msa.powersupplies["{#DURABLE.ID}",temperature] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | Port [{#NAME}]: Get data | The discovered port data. |
DEPENDENT | hpe.msa.get.ports["{#NAME}",,data] Preprocessing: - JSONPATH: |
HPE | Port [{#NAME}]: Health | Port health status. |
DEPENDENT | hpe.msa.ports["{#NAME}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Port [{#NAME}]: Status | Port status. |
DEPENDENT | hpe.msa.ports["{#NAME}",status] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Port [{#NAME}]: Type | Port type. |
DEPENDENT | hpe.msa.ports["{#NAME}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Fan [{#DURABLE.ID}]: Get data | The discovered fan data. |
DEPENDENT | hpe.msa.get.fans["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Fan [{#DURABLE.ID}]: Health | Fan health status. |
DEPENDENT | hpe.msa.fans["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Fan [{#DURABLE.ID}]: Status | Fan status. |
DEPENDENT | hpe.msa.fans["{#DURABLE.ID}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Fan [{#DURABLE.ID}]: Speed | Fan speed (revolutions per minute). |
DEPENDENT | hpe.msa.fans["{#DURABLE.ID}",speed] Preprocessing: - JSONPATH: |
HPE | Disk [{#DURABLE.ID}]: Get data | The discovered disk data. |
DEPENDENT | hpe.msa.get.disks["{#DURABLE.ID}",data] Preprocessing: - JSONPATH: |
HPE | Disk [{#DURABLE.ID}]: Health | Disk health status. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",health] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Temperature status | Disk temperature status. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",temperaturestatus] Preprocessing: - JSONPATH: ⛔️ON FAIL:DISCARD_VALUE -> - INRANGE: ⛔️ONFAIL: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Temperature | Temperature of the disk. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",temperature] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Type | Disk type: SAS: Enterprise SAS spinning disk. SAS MDL: Midline SAS spinning disk. SSD SAS: SAS solit-state disk. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",type] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Disk group | If the disk is in a disk group, the disk group name. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",group] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Storage pool | If the disk is in a pool, the pool name. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",pool] Preprocessing: - JSONPATH: ⛔️ONFAIL: - DISCARDUNCHANGED_HEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Vendor | Disk vendor. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",vendor] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Model | Disk model. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",model] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: |
HPE | Disk [{#DURABLE.ID}]: Serial number | Disk serial number. |
DEPENDENT | hpe.msa.disks["{#DURABLE.ID}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | Disk [{#DURABLE.ID}]: Space total | Total size of the disk. |
DEPENDENT | hpe.msa.disks.space["{#DURABLE.ID}",total] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - MULTIPLIER: |
HPE | Disk [{#DURABLE.ID}]: SSD life left | The percentage of disk life remaining. |
DEPENDENT | hpe.msa.disks.ssd["{#DURABLE.ID}",lifeleft] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1h |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Get data | The discovered FRU data. |
DEPENDENT | hpe.msa.get.frus["{#ENCLOSURE.ID}:{#LOCATION}",data] Preprocessing: - JSONPATH: |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Status | {#DESCRIPTION}. FRU status: Absent: Component is not present. Fault: At least one subcomponent has a fault. Invalid data: For a power supply module, the EEPROM is improperly programmed. OK: All subcomponents are operating normally. Not available: Status is not available. |
DEPENDENT | hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status] Preprocessing: - JSONPATH: - DISCARDUNCHANGEDHEARTBEAT: - JAVASCRIPT: |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Part number | {#DESCRIPTION}. Part number of the FRU. |
DEPENDENT | hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",partnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
HPE | FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Serial number | {#DESCRIPTION}. FRU serial number. |
DEPENDENT | hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",serialnumber] Preprocessing: - JSONPATH: - DISCARD UNCHANGED_HEARTBEAT:1d |
Zabbix raw items | HPE MSA: Get data | The JSON with the result of API requests. |
SCRIPT | hpe.msa.get.data Expression: The text is too long. Please see the template. |
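The two CALCULATED utilization items above (disk group and pool, referenced earlier in this table) derive the used percentage from the free and total byte counters rather than reading it from the API. A worked Python example of the same `100-last(free)/last(total)*100` expression; the byte counts are made up:

```python
def space_util(free_bytes, total_bytes):
    """100 - free/total*100, as in the CALCULATED utilization items."""
    return 100 - free_bytes / total_bytes * 100

# Hypothetical pool: 1.2 TiB free out of 4 TiB total.
free = 1.2 * 2**40
total = 4.0 * 2**40
print(round(space_util(free, total), 2))  # -> 70.0 (% used)
```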
Name | Description | Expression | Severity | Dependencies and additional info |
---|---|---|---|---|
There are errors in method requests to API | There are errors in method requests to the API. |
length(last(/HPE MSA 2040 Storage by HTTP/hpe.msa.get.errors))>0 |
AVERAGE | Depends on: - Service is down or unavailable |
System health is in degraded state | System health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=1 |
WARNING | |
System health is in fault state | System health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=2 |
AVERAGE | |
System health is in unknown state | System health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=3 |
INFO | |
Service is down or unavailable | HTTP/HTTPS service is down or unable to establish TCP connection. |
max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0 |
HIGH | |
Controller [{#CONTROLLER.ID}]: Controller health is in degraded state | Controller health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1 |
WARNING | Depends on: - Controller [{#CONTROLLER.ID}]: Controller is down |
Controller [{#CONTROLLER.ID}]: Controller health is in fault state | Controller health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2 |
AVERAGE | Depends on: - Controller [{#CONTROLLER.ID}]: Controller is down |
Controller [{#CONTROLLER.ID}]: Controller health is in unknown state | Controller health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3 |
INFO | Depends on: - Controller [{#CONTROLLER.ID}]: Controller is down |
Controller [{#CONTROLLER.ID}]: Controller is down | The controller is down. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1 |
HIGH | |
Controller [{#CONTROLLER.ID}]: High CPU utilization | Controller CPU utilization is too high. The system might be slow to respond. |
min(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |
WARNING | |
Controller [{#CONTROLLER.ID}]: Controller has been restarted | The controller uptime is less than 10 minutes. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m |
WARNING | |
Disk group [{#NAME}]: Disk group health is in degraded state | Disk group health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1 |
WARNING | |
Disk group [{#NAME}]: Disk group health is in fault state | Disk group health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2 |
AVERAGE | |
Disk group [{#NAME}]: Disk group health is in unknown state | Disk group health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3 |
INFO | |
Disk group [{#NAME}]: Disk group space is low | Disk group is running low on free space (used space exceeds {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%). |
min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"} |
WARNING | Depends on: - Disk group [{#NAME}]: Disk group space is critically low |
Disk group [{#NAME}]: Disk group space is critically low | Disk group is running low on free space (used space exceeds {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%). |
min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"} |
AVERAGE | |
Disk group [{#NAME}]: Disk group is fault tolerant with a down disk | The disk group is online and fault tolerant, but some of its disks are down. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1 |
AVERAGE | |
Disk group [{#NAME}]: Disk group has damaged disks | The disk group is online and fault tolerant, but some of its disks are damaged. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9 |
AVERAGE | |
Disk group [{#NAME}]: Disk group has missing disks | The disk group is online and fault tolerant, but some of its disks are missing. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is offline | Either the disk group is using offline initialization, or its disks are down and data may be lost. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined critical | The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined offline | The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined unsupported | The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk | The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6 |
AVERAGE | |
Disk group [{#NAME}]: Disk group is stopped | The disk group is stopped. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7 |
AVERAGE | |
Disk group [{#NAME}]: Disk group status is critical | The disk group is online but isn't fault tolerant because some of its disks are down. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2 |
AVERAGE | |
Pool [{#NAME}]: Pool health is in degraded state | Pool health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1 |
WARNING | |
Pool [{#NAME}]: Pool health is in fault state | Pool health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2 |
AVERAGE | |
Pool [{#NAME}]: Pool health is in unknown state | Pool [{#NAME}] health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3 |
INFO | |
Pool [{#NAME}]: Pool space is low | Pool is running low on free space (used space exceeds {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%). See the evaluation sketch after this table. |
min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"} |
WARNING | Depends on: - Pool [{#NAME}]: Pool space is critically low |
Pool [{#NAME}]: Pool space is critically low | Pool is running low on free space (used space exceeds {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%). |
min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"} |
AVERAGE | |
Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state | Enclosure health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1 |
WARNING | |
Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state | Enclosure health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state | Enclosure health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3 |
INFO | |
Enclosure [{#DURABLE.ID}]: Enclosure has critical status | Enclosure has critical status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2 |
HIGH | |
Enclosure [{#DURABLE.ID}]: Enclosure has warning status | Enclosure has warning status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3 |
WARNING | |
Enclosure [{#DURABLE.ID}]: Enclosure is unavailable | Enclosure is unavailable. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7 |
HIGH | |
Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable | Enclosure is unrecoverable. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4 |
HIGH | |
Enclosure [{#DURABLE.ID}]: Enclosure has unknown status | Enclosure has unknown status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6 |
INFO | |
Power supply [{#DURABLE.ID}]: Power supply health is in degraded state | Power supply health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1 |
WARNING | |
Power supply [{#DURABLE.ID}]: Power supply health is in fault state | Power supply health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Power supply [{#DURABLE.ID}]: Power supply health is in unknown state | Power supply health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3 |
INFO | |
Power supply [{#DURABLE.ID}]: Power supply has error status | Power supply has error status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2 |
AVERAGE | |
Power supply [{#DURABLE.ID}]: Power supply has warning status | Power supply has warning status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1 |
WARNING | |
Power supply [{#DURABLE.ID}]: Power supply has unknown status | Power supply has unknown status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4 |
INFO | |
Port [{#NAME}]: Port health is in degraded state | Port health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1 |
WARNING | |
Port [{#NAME}]: Port health is in fault state | Port health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2 |
AVERAGE | |
Port [{#NAME}]: Port health is in unknown state | Port health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3 |
INFO | |
Port [{#NAME}]: Port has error status | Port has error status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2 |
AVERAGE | |
Port [{#NAME}]: Port has warning status | Port has warning status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1 |
WARNING | |
Port [{#NAME}]: Port has unknown status | Port has unknown status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4 |
INFO | |
Fan [{#DURABLE.ID}]: Fan health is in degraded state | Fan health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1 |
WARNING | |
Fan [{#DURABLE.ID}]: Fan health is in fault state | Fan health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Fan [{#DURABLE.ID}]: Fan health is in unknown state | Fan health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3 |
INFO | |
Fan [{#DURABLE.ID}]: Fan has error status | Fan has error status. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1 |
AVERAGE | |
Fan [{#DURABLE.ID}]: Fan is missing | Fan is missing. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3 |
INFO | |
Fan [{#DURABLE.ID}]: Fan is off | Fan is off. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2 |
WARNING | |
Disk [{#DURABLE.ID}]: Disk health is in degraded state | Disk health is in degraded state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1 |
WARNING | |
Disk [{#DURABLE.ID}]: Disk health is in fault state | Disk health is in fault state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2 |
AVERAGE | |
Disk [{#DURABLE.ID}]: Disk health is in unknown state | Disk health is in unknown state. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3 |
INFO | |
Disk [{#DURABLE.ID}]: Disk temperature is high | Disk temperature is high. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3 |
WARNING | |
Disk [{#DURABLE.ID}]: Disk temperature is critically high | Disk temperature is critically high. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2 |
AVERAGE | |
Disk [{#DURABLE.ID}]: Disk temperature is unknown | Disk temperature is unknown. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4 |
INFO | |
FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU status is Degraded or Fault | FRU status is Degraded or Fault. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=1 |
AVERAGE | |
FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU ID data is invalid | The FRU ID data is invalid. The FRU's EEPROM is improperly programmed. |
last(/HPE MSA 2040 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=0 |
WARNING |
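The space triggers above pair a WARNING and an AVERAGE severity on the same utilization item, and the warning is suppressed while the critical trigger is active (the "Depends on" column). A simplified Python sketch of that `min(...,5m) > threshold` evaluation; the samples and thresholds are illustrative:

```python
# Hypothetical utilization samples from the last 5 minutes.
window = [91.0, 92.5, 93.1, 92.8]

WARN = 80.0  # {$HPE.MSA.POOL.PUSED.MAX.WARN} default
CRIT = 90.0  # {$HPE.MSA.POOL.PUSED.MAX.CRIT} default

# min(...,5m) > threshold: every sample in the window must exceed it.
crit_fires = min(window) > CRIT
warn_fires = min(window) > WARN and not crit_fires  # dependency suppresses it

print(crit_fires, warn_fires)  # -> True False: only the AVERAGE trigger fires
```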
Please report any issues with the template at https://support.zabbix.com
You can also provide feedback, discuss the template or ask for help with it at ZABBIX forums.