May 2, 2026 · 15 min read

Why your AP keeps rebooting at night: PoE budgets, midspan injectors, and the loose-pair problem

"The Wi-Fi was down again at 3 AM" is almost never a Wi-Fi problem. It is a PoE budget overrun, a heat-soaked injector, a mismatched PoE class, or a single loose pair in a punchdown. Here is the diagnostic order we run on every service call.

UniFi · PoE · Networking · Troubleshooting · Cabling

Few support tickets are as maddening as “the Wi-Fi was down again at 3 AM.” The homeowner wakes up to a stalled doorbell push notification or a thermostat that lost cloud sync, checks the dashboard, and sees that one access point rebooted in the middle of the night for the third time this month. By morning, everything works again. The router logs say nothing useful. The internet is fine.

When we get called in, it’s almost always one of three things: the PoE switch is over budget, an injector or PoE port is failing under thermal stress, or there’s a single bad RJ45 pair that re-negotiates every few hours. None of those show up clearly in the UniFi dashboard until you know what to look for. This is the field guide we use.

Why APs reboot at night specifically

The pattern of “mostly fine during the day, drops at night” is real, and it has physical causes:

  • The house is quieter and cooler. HVAC fans are off, attic temperatures drop, and a borderline component that was at 95% headroom in afternoon heat now contracts and loses contact at 2 AM.
  • The network is also quieter. Cloud backups, Sonos firmware updates, camera nightly clip uploads, and Apple/Google device pushes all batch to the small hours. A PoE budget that was barely OK in the evening now spikes when six APs and four cameras all start transmitting at once.
  • You’re asleep, so you only see the survivors. A drop at 3:14 AM that’s recovered by 3:18 AM looks identical in a phone notification log to a drop at noon that’s still ongoing. People only notice the noisy daytime drops.

None of those would matter on a perfectly healthy install. They show up because something is already marginal — and night-time is when the marginal thing finally gives up.

Cause 1: PoE budget overrun

A PoE switch has a finite power budget, expressed in watts. A UniFi USW Pro 24 PoE has 400 W. A USW Pro 48 PoE has 600 W. A USW Enterprise 8 PoE only has 120 W. The total wattage of every connected device must stay under that number, with headroom — not at it.

The trick is that PoE devices don’t draw a steady wattage. A U7 Pro Max might idle at 8 W and spike to 25 W during heavy 6 GHz traffic. A G5 Pro camera might idle at 6 W and spike to 14 W when the IR illuminator kicks on. Add up the spike values, not the idle values, when sizing the budget. A typical AP/camera mix on a residential install:

  • U6-Pro: 12.5 W (PoE+, 802.3at)
  • U6 Enterprise: 22 W (PoE+, 802.3at)
  • U7 Pro: 21 W (PoE+, 802.3at)
  • U7 Pro Max: 25 W (PoE+, 802.3at)
  • U7 Pro XGS: 31 W (PoE++, 802.3bt)
  • G5 Bullet: 5 W (PoE)
  • G5 Pro: 14 W (PoE+)
  • G5 Turret Ultra: 5 W (PoE)
  • G6 Bullet / Turret: 6–8 W (PoE)
  • UniFi G5 Doorbell Pro: 8 W (PoE)

Pre-2026 firmware on some UniFi switches handled budget overrun by simply rebooting the lowest-priority PoE port. Newer firmware handles it more gracefully, but the symptom — one specific AP going dark periodically — is the same. Check the switch’s “PoE Used” metric in the dashboard. If it’s above 70% of the rated budget at idle, you’re going to see drops under load.
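The budget arithmetic is simple enough to script. A minimal Python sketch, assuming the spike wattages from the table above and our 70% rule of thumb (the device names and the threshold are this post's conventions, not a UniFi API):

```python
# Sum the *spike* wattage of every PoE device on a switch and flag the
# budget if it crosses the ~70% headroom line. Wattages mirror the table
# in this post; the 0.70 threshold is a rule of thumb, not a UniFi spec.

SPIKE_WATTS = {
    "U6-Pro": 12.5, "U7 Pro": 21, "U7 Pro Max": 25,
    "G5 Bullet": 5, "G5 Pro": 14,
}

def poe_budget_ok(devices, switch_budget_w, headroom=0.70):
    """Return (total_spike_watts, ok) for a list of device names."""
    total = sum(SPIKE_WATTS[d] for d in devices)
    return total, total <= switch_budget_w * headroom

# Example: six APs and four cameras on a 400 W USW Pro 24 PoE
load = ["U7 Pro Max"] * 4 + ["U7 Pro"] * 2 + ["G5 Pro"] * 4
total, ok = poe_budget_ok(load, 400)
print(total, ok)  # 198 W of spike load, under the 280 W (70%) line
```

Note that the same ten devices summed at idle wattage would look comfortably safe, which is exactly how these installs pass the day-one check and fail at 3 AM.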

The fix on a real install is to either move some load to a second PoE switch, swap to a higher-budget switch, or step down the radios on the affected APs. We covered the broader management question in network switches: managed vs unmanaged.

The 802.3af / at / bt cheat sheet

Three PoE standards exist, and confusion between them is the second most common cause of AP reboots:

  • 802.3af (PoE): 15.4 W at the source, 12.95 W delivered to the device. Adequate for older APs and most cameras.
  • 802.3at (PoE+): 30 W at the source, 25.5 W delivered. Required by U7 Pro, U7 Pro Max, U6 Enterprise, and most modern APs.
  • 802.3bt Type 3 (PoE++): 60 W at the source, 51 W delivered. Required by U7 Pro XGS and the larger U7 Pro Wall when fully lit.
  • 802.3bt Type 4 (PoE++): 90 W at the source, 71 W delivered. Used by some high-power devices and not common in homes yet.

A U7 Pro Max plugged into a non-PoE+ port will appear to power up, run on partial power, and reboot under load every few hours when its current draw hits a sustained ceiling. The dashboard doesn’t scream “wrong PoE class” — it just shows a flapping AP. Check the port’s configured PoE mode in your switch config. We see this every month on installs where an older USW Lite 8 PoE was reused with a newer AP and nobody updated the port.
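The class-mismatch check is just a table lookup. A minimal sketch, where the delivered wattages are the 802.3 standard figures and the device table is an assumption drawn from this post's cheat sheet (not a real UniFi config API):

```python
# Compare what a device needs against what the port's configured PoE
# standard can deliver. DELIVERED_W uses the 802.3 delivered-power figures;
# DEVICE_NEEDS is a hypothetical mapping based on this post's cheat sheet.

DELIVERED_W = {"802.3af": 12.95, "802.3at": 25.5, "802.3bt-3": 51.0, "802.3bt-4": 71.0}
DEVICE_NEEDS = {"U7 Pro Max": "802.3at", "U7 Pro XGS": "802.3bt-3", "G5 Bullet": "802.3af"}

def port_can_power(device, port_standard):
    """True if the port's standard delivers at least what the device's standard requires."""
    need = DELIVERED_W[DEVICE_NEEDS[device]]
    return DELIVERED_W[port_standard] >= need

print(port_can_power("U7 Pro Max", "802.3af"))  # False: flapping-AP territory
print(port_can_power("U7 Pro Max", "802.3at"))  # True
```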

We covered the basics of PoE in the PoE explained post; this post is the field-failure follow-up.

Cause 2: a single overheating injector

Mid-span PoE injectors — the brick-shaped device with a wall-wart power supply, a LAN-in port, and a PoE-out port — are everywhere on retrofit installs because they’re cheap and let you add an AP without replacing the switch. They’re also the single most common point of heat-related failure we see.

  • Cheap injectors don’t have proper thermal derating. As the room temperature rises through the day, the injector’s internal converter loses efficiency. By 11 PM, the unit has heat-soaked enough to trip its over-temperature protection. The AP drops, the unit cools, the AP comes back, the cycle repeats.
  • Injectors stuffed inside a closed AV cabinet with no ventilation fail faster than ones on open shelving.
  • Injectors daisy-chained off a power strip with other heat-producing electronics share thermal fate. We’ve seen six injectors stacked in a corner of a Park City basement; the middle one was 30° hotter than the outer two and rebooted nightly.
  • Some injectors lie about their PoE class. They advertise PoE+ but only deliver PoE. The AP boots, runs at reduced power, and crashes under traffic. Test the actual delivered voltage and current with a PoE tester before blaming the AP.

The fix is to retire injectors entirely and put PoE at the switch level. On every new install we do, injectors are temporary scaffolding only — they exist for 48 hours during a cutover and then get removed. If a permanent injector is the only option (rare), it lives somewhere ventilated, on its own circuit, with a cheap thermal label so the next tech can see if it’s been cooking.

Cause 3: the loose-pair problem

This is the cause that takes the longest to find, and it’s the cause we end up writing about most because the symptoms are so subtle. A terminated RJ45 jack has eight conductors in four twisted pairs. In 10/100 Ethernet, the pair on pins 1/2 carries TX and the pair on pins 3/6 carries RX; gigabit uses all four pairs bidirectionally. PoE typically rides on pairs 4/5 and 7/8 (Mode B) or shares pins 1/2 and 3/6 with the data (Mode A).

If one of the eight wires is loose in the punchdown or RJ45 crimp — not disconnected, just loose — the link still trains. It might even negotiate at a gigabit. But under thermal expansion and contraction across a Utah day-night cycle, that loose wire intermittently breaks contact. The link renegotiates. PoE re-asserts. The AP reboots.

Symptoms of the loose-pair problem:

  • The AP reboots roughly the same time every day, or every couple of days.
  • The dashboard shows the AP renegotiating from 1 Gbps to 100 Mbps and back, or worse, dropping PoE briefly and re-powering.
  • The cable test in UniFi shows all pairs connected at full length, but with one pair showing a slightly different distance than the others — that delta is a kinked or loose wire.
  • Re-seating the patch cable at the rack briefly fixes it, then it comes back.

The fix is to re-terminate both ends. Punch down the keystone or RJ45 from scratch, with proper pair separation and twist preservation right up to the punch block. Then test with a real cable certifier (Fluke MicroScanner, T3 Innovations, or similar) to confirm all four pairs have consistent length and no resistance imbalance. We covered the cable-category basics in Cat5e vs Cat6 vs Cat6A.

If you don’t have a certifier, the next-best test is to swap the cable end-to-end with a known-good patch cable that bypasses the run. If the problem follows the original cable, you’ve found it. If it stays at the AP, the AP is the fault.
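The pair-length symptom from the list above is easy to express as a check: three pairs report (nearly) the same TDR length, and the one that deviates is the suspect. A minimal sketch, with illustrative lengths and tolerance rather than UniFi's actual cable-test output format:

```python
# Flag any pair whose reported TDR length deviates from the median beyond
# a tolerance. Pair names, lengths, and the 2 m tolerance are illustrative,
# not the actual UniFi cable-test schema.

def suspect_pairs(pair_lengths_m, tolerance_m=2.0):
    """Return the pairs whose length differs from the median by more than tolerance."""
    lengths = sorted(pair_lengths_m.values())
    median = lengths[len(lengths) // 2]
    return [p for p, l in pair_lengths_m.items() if abs(l - median) > tolerance_m]

# Three pairs at ~46 m, one pair reading short: that's the loose wire
print(suspect_pairs({"1/2": 46.1, "3/6": 46.0, "4/5": 41.3, "7/8": 46.2}))  # ['4/5']
```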

Cause 4: the switch port itself is failing

Less common than the first three, but real. A single PoE port on an aging switch can develop a bad MOSFET on its power-injection circuit. The port works, delivers power, occasionally renegotiates, and slowly gets worse over months. The cleanest test: move the AP’s patch cable to a different PoE port on the same switch. If the problem moves with it, the cable or AP is the fault. If it stays on the original port with a different AP, retire the port.

On UniFi gear we’ve replaced exactly two switches in the last three years for this kind of failure. It’s rare. But before we replaced them, we’d already chased loose pairs and injectors for weeks.

Reading the UniFi dashboard for these symptoms

The dashboard shows everything you need, but only if you know which graph to open. We covered the broader read in reading your UniFi dashboard; here’s the AP-reboot-specific subset.

  • Devices → AP → Insights → Uptime. A clean AP shows weeks. An unhealthy AP shows reboots every 1–3 days. Pattern is what matters.
  • Switch → Port → PoE Used vs PoE Max. Look at the port for the affected AP. Is it pulling near its class limit? Is the switch globally near its budget?
  • Switch → Port → Cable Test. Run it from the dashboard. If all four pairs show the same length, good. If one pair shows “short” or a different distance, that’s your loose pair.
  • Insights → AP Channel Utilization at 3 AM. If utilization stays low and the AP is still rebooting, you’re in pure-power-issue territory, not a Wi-Fi issue.
  • Topology → AP link speed. Should be 1 Gbps (or 2.5/10 Gbps for newer APs). If it shows 100 Mbps for a wired AP, you have a cabling fault, full stop.

The diagnostic order we run on a service call

  1. Confirm the AP is wired (not mesh-uplinked) and powered from a known-good port. Mesh-uplinked APs have their own failure modes; this post isn’t about those.
  2. Pull the dashboard PoE budget. If global PoE used is over 70% of switch max, address that first.
  3. Verify the switch port is configured for the correct PoE class for the connected AP.
  4. If an injector is in the path, remove it and test from the switch directly. If the problem goes away, the injector was the cause.
  5. Run the UniFi cable test on the affected port. Look for any pair that isn’t consistent with the others.
  6. Swap the patch cable, then if the problem persists, swap the keystone, then if it persists, swap the entire run with a known-good temporary line.
  7. If the run is healthy and PoE is healthy, swap the AP itself with a known-good unit. If the problem follows, RMA the AP.
  8. If the problem stays on the port no matter what AP is plugged in, retire the switch port (or the switch).
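The short-circuit logic of that checklist can be made explicit in a few lines. A minimal sketch, where every check name is a hypothetical stand-in for a manual step (pulling the dashboard, swapping a cable), not a real API:

```python
# Walk the service-call checklist in order and return the first verdict
# that fires. The check names and verdict strings paraphrase the numbered
# steps above; they are illustrative, not a real diagnostic tool.

def diagnose(checks):
    """checks: dict of check-name -> bool. Returns the first failing step's fix."""
    order = [
        ("poe_budget_over_70pct", "shed load or move to a bigger switch"),
        ("poe_class_mismatch", "fix the port's PoE mode"),
        ("injector_in_path", "remove the injector, test from the switch"),
        ("cable_test_pair_delta", "re-terminate both ends"),
        ("problem_follows_ap", "RMA the AP"),
        ("problem_stays_on_port", "retire the port or the switch"),
    ]
    for step, verdict in order:
        if checks.get(step, False):
            return verdict
    return "healthy"

print(diagnose({"cable_test_pair_delta": True}))  # re-terminate both ends
```

The ordering matters: budget and class problems are cheaper to confirm than a re-termination, so they come first even though the loose pair is the more common final answer.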

Why this is more common in Utah than people expect

Three local factors make these failures more visible here than in milder climates:

  • Temperature swings. A 60-degree day-to-night swing in spring and fall is routine on the Wasatch Front. Cables in unconditioned attic spaces, where most APs actually run, expand and contract more than they would in a temperate climate. Marginal terminations fail sooner.
  • Dry, dusty air. Static and fine dust accumulate inside switch enclosures much faster than at sea level. We see PoE ports go intermittent on switches that have been running clean for ten years in Florida or Pacific Northwest installs.
  • Big houses, long runs. A 7,000-square-foot Park City mountain home routinely has 250′ cable runs from the rack to the most remote AP. PoE voltage drop on a 250′ run is real — you can lose 5–7% of the source voltage just in the cable. A budget that’s adequate for 100′ runs gets tight at 250′.
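That voltage-drop figure is a back-of-envelope calculation you can reproduce. A sketch assuming roughly 24 AWG copper (about 0.094 Ω/m per conductor) and two-pair Mode A/B PoE, where each leg is two conductors in parallel; the constants are illustrative, not measured values from a specific run:

```python
# Approximate PoE voltage drop as a percentage of source voltage.
# Assumes ~24 AWG copper and two-pair PoE: each leg is two conductors
# in parallel, so the round-trip loop resistance is ~one conductor's worth.

FEET_TO_M = 0.3048
OHM_PER_M = 0.094            # per conductor, roughly 24 AWG copper

def poe_drop_pct(run_feet, load_w, source_v=54.0):
    length_m = run_feet * FEET_TO_M
    loop_r = OHM_PER_M * length_m     # out on one pair, back on the other
    current = load_w / source_v       # first-order estimate, ignores drop feedback
    return 100 * current * loop_r / source_v

print(round(poe_drop_pct(250, 25), 1))  # ~6.1% lost on a 250' run at 25 W
```

A 25 W AP on a 250′ run lands right in the 5–7% band quoted above, and the loss scales linearly with both run length and load.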

Prevention: how we wire to avoid this

On new installs, the AP-reboot class of problem is preventable. The decisions we make during the new-construction pre-wire directly determine whether the homeowner is calling us in three years for a phantom reboot:

  • Run two Cat6A drops to every AP location, not one. The second is a spare for failure replacement, so nobody is pulling drywall later.
  • Terminate to keystones at both ends, not RJ45 crimps. Punchdowns are dramatically more reliable over decades than field-crimped RJ45.
  • Size the PoE switch with at least 30% headroom above the day-one device count, on the assumption that future APs and cameras will be higher-power than today’s.
  • Avoid injectors as a permanent solution. Specify a PoE+ or PoE++ switch from the start.
  • On runs over 200′, use Cat6A and verify the run length is below the 100 m Ethernet channel limit.
  • Certify every cable with a real tester before closing the wall. We won’t close drywall over a run that hasn’t passed a Fluke test.
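The 30%-headroom rule above turns switch selection into one division. A minimal sketch using the Pro-series budgets quoted earlier in this post (the selection logic is our rule of thumb, not a sizing tool):

```python
# Pick the smallest switch whose PoE budget keeps the day-one spike load
# at or below 70% of the budget. Budgets are the figures quoted in this
# post; the 30% headroom rule is our convention.

SWITCH_BUDGETS_W = {"USW Pro 24 PoE": 400, "USW Pro 48 PoE": 600}

def smallest_switch(day_one_load_w, headroom=0.30):
    """Return the cheapest-fitting switch name, or None if nothing fits."""
    needed = day_one_load_w / (1 - headroom)
    fits = {name: b for name, b in SWITCH_BUDGETS_W.items() if b >= needed}
    return min(fits, key=fits.get) if fits else None

print(smallest_switch(198))  # a 198 W spike load needs ~283 W of budget
```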

Bottom line

When an AP reboots at 3 AM, it’s almost never the Wi-Fi. It’s the cable, the power, or the port. PoE budget overruns, mismatched PoE classes, heat-soaked injectors, loose pairs in a punchdown, and aging switch ports cover almost every case we’ve seen. The dashboard tells you which one if you know which graph to open.

The two-minute version: pull the PoE budget, run the cable test, swap the patch cable, retire any injector. That’s 80% of these calls.

Keystone Integration designs and supports UniFi networks across Holladay, Park City, Alpine, Draper, Lehi, and the rest of the Wasatch Front — with proper PoE budgeting, certified cabling, and AP installs that don’t reboot at night. See the full service list or get in touch if your dashboard is showing a phantom reboot you can’t explain.