IR300 connect/disconnect events are not working with MQTT integration

Hi, 
From https://iot.inhandnetworks.com/
under Integrations, I created an MQTT integration with the event types connect, disconnect, device.info, series, and lbs, scoped to a single device under Gateways. I then rebooted the device from the Gateways page in order to see the disconnect and connect messages in MQTTX.
However, the disconnect and/or connect messages do not show up in MQTTX, even though I have subscribed broadly, e.g. inhand/# and inhand/+/+ (for my use case, where the topic is inhand/<device_id>/<event>).
Here are my settings for the MQTT integration from InHand:
Protocol / Host
Client ID
Topic
QoS: 0
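As a sanity check on the subscription side: both filters from the post should cover a connect-event topic of the assumed inhand/<device_id>/<event> shape, so the topic filter is unlikely to be the problem. Below is a minimal stdlib-only sketch of the MQTT `+`/`#` wildcard matching rules (the device ID is a made-up placeholder, not a real one):

```python
def topic_matches(filter_str: str, topic: str) -> bool:
    """Rough implementation of MQTT topic-filter matching for '+' and '#'."""
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # '#' matches this level and everything below it
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:  # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

# Both subscriptions from the post match a hypothetical connect-event topic:
print(topic_matches("inhand/#", "inhand/RT1234567890/connect"))    # True
print(topic_matches("inhand/+/+", "inhand/RT1234567890/connect"))  # True
print(topic_matches("inhand/+", "inhand/RT1234567890/connect"))    # False (one level short)
```

Since both filters match, the missing messages point at the events never being published, not at the subscription.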
I'm getting "event":"series" and "event":"lbs", but not disconnect and/or connect.
Did you rename these event types, or how can I get the connect/disconnect messages on my subscription when the device connects or disconnects?
Please let me know if you need more information from us to resolve this issue. 
Thank you

This is likely a timing issue, not a configuration error.

When you reboot the IR300, it often restarts and reconnects faster than the InCloud platform's Heartbeat Timeout (Keepalive). If the device comes back before the cloud marks it as "Offline," no disconnect or connect event is generated, because the platform considers the session continuous.

How to Fix the Test

To trigger these events, the cloud must explicitly see the device drop offline.

  1. Power off the device (or unplug the WAN cable).

  2. Wait at least 5–10 minutes. (The default heartbeat tolerance is often 3–5 minutes.)

  3. Check MQTTX: You should receive the disconnect message once the cloud officially marks the device offline.

  4. Power On: Turn the device back on. You will receive the connect message once it re-establishes the tunnel.
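If you note the wall-clock time when you cut power and the arrival time of the disconnect message in MQTTX, the gap between them is a direct estimate of the platform's actual heartbeat tolerance. A small stdlib-only helper for that arithmetic (the timestamps below are made up for illustration):

```python
from datetime import datetime

def detection_delay(power_off: str, disconnect_seen: str, fmt: str = "%H:%M:%S") -> float:
    """Seconds between cutting power and the cloud publishing 'disconnect'."""
    t0 = datetime.strptime(power_off, fmt)
    t1 = datetime.strptime(disconnect_seen, fmt)
    return (t1 - t0).total_seconds()

# Example: power pulled at 10:00:00, disconnect arrived at 10:04:10,
# suggesting a heartbeat tolerance of roughly four minutes on this account.
print(detection_delay("10:00:00", "10:04:10"))  # 250.0
```

If the measured delay is consistently longer than your reboot cycle, that confirms the reboot window is simply shorter than the detection window.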

Why this happens:
The series and lbs events work because they are device-initiated (pushed by the router).
The connect/disconnect events are platform-initiated (generated by the server) and depend entirely on the server detecting a timeout.
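Once the server-generated events do start arriving, you can tell the two classes apart by the "event" field that the post already observed in the payloads ("event":"series", "event":"lbs"). A hedged sketch, assuming the connect/disconnect payloads carry the same field (the real messages will contain additional keys):

```python
import json

def classify(payload: bytes) -> str:
    """Split messages into platform session events vs. device telemetry by the 'event' key."""
    event = json.loads(payload).get("event", "")
    if event in ("connect", "disconnect"):
        return f"platform-initiated session event: {event}"
    return f"device-initiated telemetry: {event}"

# Payloads trimmed to the one field the post actually shows:
print(classify(b'{"event":"series"}'))      # device-initiated telemetry: series
print(classify(b'{"event":"disconnect"}'))  # platform-initiated session event: disconnect
```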

Summary: Your settings are correct. The reboot was simply too fast for the cloud to notice the outage. Try a longer power-down.