May 7, 2026 · 12 min read

Why your AP keeps rebooting at night: PoE budgets, midspan injectors, and the loose-pair problem

The Wi-Fi was down again at 3 AM, and the router shows nothing. Here is what actually causes UniFi APs to reboot overnight, how to diagnose PoE budget problems, midspan injector failures, and loose-pair termination issues, and how to fix them without re-pulling cable.

UniFi · PoE · Access Points · Networking · Troubleshooting

The call we get more than any other in the spring and fall: “The Wi-Fi was down again at 3 AM. By the time I got up it was working. The router shows no outage. What’s going on?” Nine times out of ten, the router is fine. The access points are rebooting on their own — one at a time, briefly, in the middle of the night — and the symptom you see is a sleeping phone losing its connection just long enough to drop a notification or kill a download.

This post walks through the three causes we actually find when we trace these out. They’re almost always one of the same handful of issues: PoE budget math gone wrong, a tired midspan injector, or a single loose pair on a wall jack the previous installer punched down on a Friday afternoon. None of these show up on a speed test. All of them show up at 2:47 AM.

Why nighttime is the giveaway

APs reboot at night for a specific reason: that’s when the house is quietest and the radios drop into low-power background scanning. The PoE current draw on a U7 Pro Max with no clients connected is about half what it is during a busy 8 PM streaming window. So why do they fall over then instead of during the day? Two reasons.

  • Controller-driven background work. UniFi schedules firmware checks, channel re-scans, and statistics rollups in the small hours by default. Each of these creates a brief load spike on the AP’s CPU and a corresponding small spike in PoE draw. If the AP is barely getting enough power to begin with, that spike is what tips it over.
  • Thermal cycling on the cable run. Attic and exterior runs cool 20–40°F at night during a typical Utah shoulder season. Copper pairs that are marginally seated in an RJ45 contract just enough to lose contact intermittently. The AP sees a CRC error storm, the link bounces, and the device reboots itself.

If you’ve been blaming the cloud, your ISP, or the firmware — almost always, it’s the physical layer. Here’s how to find which part.

The PoE budget problem

Every PoE switch has a total power budget across all ports, separate from the per-port maximum. A 24-port UniFi Pro PoE switch is rated at 400 W total. That sounds like plenty until you start adding up what you’re actually pulling.

Real-world worst-case draws we measure on installs:

  • U7 Pro Max AP: ~22–26 W under load (PoE+)
  • U6 Pro AP: ~13 W (PoE+)
  • G5 Bullet camera with IR full-on: ~9 W
  • G5 Pro camera with smart-detect re-encode: ~12–15 W
  • G5 PTZ in motion with IR: ~25 W (PoE++)
  • UniFi Access reader/hub: ~6–9 W
  • UDM Pro: not a PoE powered device, so it doesn’t count against the budget, but it does add heat to the closet

A modest install — six APs, eight cameras, two Access readers — can pull 240–280 W routinely, with peaks close to 350 W when a PTZ is panning, a smart-detection event fires on three cameras at once, and the controller pushes a firmware heartbeat. On a 400 W switch you’re fine on paper. In practice, the switch will start throttling individual ports under sustained load, and the lowest-priority ports drop first.

See our PoE explained post for the 802.3af / at / bt distinctions; the short version for this discussion is that a single Wi-Fi 7 AP wants PoE+ (802.3at, 30 W) and most U7 Pro Max units actually negotiate PoE++ (802.3bt) for full power. If your switch advertises “PoE+” without the bt suffix, your top-tier APs may be running at reduced radio power and you’ll never see the warning unless you check.

The fix: budget math you can do on the back of an envelope

Add up the worst-case wattage of every PoE device on the switch. Multiply by 1.25 for headroom. If that number exceeds 70% of the switch’s rated budget, plan a bigger switch or a second one. The 70% rule keeps you out of trouble during firmware updates, transient PTZ moves, and IR storms when deer wander through three camera fields at the same time.
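The envelope math above is simple enough to sketch in a few lines of Python. The wattage figures below are illustrative worst-case planning numbers, not vendor specs:

```python
# Back-of-envelope PoE budget check: sum worst-case draws, add 25% headroom,
# and compare against 70% of the switch's rated budget.
def poe_budget_ok(device_watts, switch_budget_w, headroom=1.25, ceiling=0.70):
    """Return (planned_load, limit, ok) for a PoE switch."""
    planned = sum(device_watts) * headroom   # worst-case draw plus 25% headroom
    limit = switch_budget_w * ceiling        # stay under 70% of rated budget
    return planned, limit, planned <= limit

# Six APs, eight cameras, two Access readers on a 400 W switch:
devices = [24] * 6 + [12] * 8 + [8] * 2     # rough worst-case watts per device
planned, limit, ok = poe_budget_ok(devices, 400)
print(f"planned {planned:.0f} W vs limit {limit:.0f} W -> "
      f"{'OK' if ok else 'plan a bigger switch'}")
```

Run with those numbers, the planned load (320 W) exceeds the 70% limit (280 W) on a 400 W switch, which is exactly the “fine on paper, flaky in practice” install described above.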

On a UniFi Pro switch, the dashboard shows live PoE consumption per port. We pull that data on every site visit — if a port is averaging more than 80% of its negotiated maximum, that AP or camera is a brownout candidate. Our UniFi dashboard guide walks through where these numbers live in the UI.
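If you export that per-port data, flagging the 80% brownout candidates is a one-liner. A minimal sketch, assuming you’ve dumped the port stats into plain records; the field names here are hypothetical, so map them to whatever your controller export actually provides:

```python
# Flag brownout candidates: ports averaging more than 80% of their
# negotiated PoE maximum. Record field names are hypothetical.
def brownout_candidates(ports, threshold=0.80):
    return [p["name"] for p in ports
            if p["avg_draw_w"] > threshold * p["negotiated_max_w"]]

ports = [
    {"name": "Port 3 (U7 Pro Max)", "avg_draw_w": 25.1, "negotiated_max_w": 30.0},
    {"name": "Port 7 (G5 Bullet)",  "avg_draw_w": 8.4,  "negotiated_max_w": 15.4},
]
print(brownout_candidates(ports))  # the AP at 25.1 of 30 W is over the 80% line
```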

The midspan injector problem

If your install uses individual PoE injectors instead of (or in addition to) a PoE switch, you’ve got a separate failure mode. Midspan injectors are little wall-warts that sit between a non-PoE switch and a PoE device. They’re cheap, convenient, and a very common source of intermittent late-night reboots.

  • Heat death. Cheap injectors run warm. Stuff one behind a media console with no airflow, and over 18 months the capacitors dry out. The injector still works during the day, but can’t deliver full power during a 3 AM background scan when the AP wants its peak.
  • Wrong PoE class. An AP that negotiated PoE+ to a switch will fall back to 802.3af (15 W) on a passive injector that can’t do classification. The AP comes up, works at reduced radio power, and reboots when the controller pushes a config that the underpowered radio can’t support.
  • Daisy-chained injectors. We occasionally find installs where a previous contractor stacked two injectors in series to “extend” a long run. This is not a thing. Don’t do it. The voltage drop on each pass and the lack of negotiation handoff mean the device on the far end is running on whatever it can scavenge.

If your install has more than two midspan injectors, the right answer is almost always to replace them with a properly-sized PoE switch. See our post on managed vs unmanaged switches for what that looks like in practice. A 16-port UniFi PoE switch is cheaper than four good injectors and gives you the dashboard visibility to actually see the problem when it starts.

The loose-pair problem

This is the one that masquerades as everything else. Symptoms: random reboots, random throughput drops, the link negotiates 100Mbit instead of 1Gbit one morning out of nowhere, smart-detection clips arrive with frame skips on one camera but not its neighbor on the same switch.

Cause: one of the eight conductors in the cable is making intermittent contact — a loose RJ45 pin, a punch-down that didn’t seat, a kinked pair inside the wall, a staple that pinched during framing. Gigabit Ethernet uses all four pairs; PoE uses two of them (or all four on PoE++). Lose one conductor and the link bounces between full-duplex 1Gbit, half-duplex, and 100Mbit until it finally drops out and re-negotiates.

We see this most often on:

  • Field-terminated RJ45 ends crimped on stranded cable instead of solid — the wrong crystal-and-conductor combination is a slow-motion failure that can take a year to manifest.
  • Wall jacks where the previous installer punched down on a hot day and the PVC jacket cold-flowed away from the IDC contact over the next winter.
  • Outdoor runs with Cat6 instead of Cat6A where moisture wicked into a non-shielded jacket and corroded a single pair.
  • Old coax-to-Ethernet runs that someone repurposed by punching down onto a coax-jacketed cable that was never meant to carry data.

Diagnosing it: what we actually do on a service call

There are three tools and one technique we use to pinpoint which of these is the culprit. None of them require a $5,000 cable certifier.

  1. Pull the UniFi event log for the AP. Look at the “disconnected” reason codes. “Power loss” or “PoE fault” points at the switch or injector. “Link down” followed by a renegotiate points at the cable. The pattern tells you where to look.
  2. Check the per-port PoE telemetry. On a UniFi Pro switch, the port detail page shows voltage, current, and class. A port that sits at 47.5 V during the 8 PM peak but reads 50.2 V at 3 AM is telling you the load collapsed overnight: the AP browned out and was drawing almost nothing while it rebooted. A port stuck at PoE class 0 or 3 on a device that should be class 4 didn’t negotiate properly.
  3. Run a TDR test. Most managed switches (including the UniFi line) include Time-Domain Reflectometer cable diagnostics on each port. It tells you the length of each pair, where any open is, and which pair is broken. A pair that reads “open at 14 m” on a cable you ran 22 m is your culprit. The TDR is not as accurate as a Fluke certifier but it’s good enough to find the obvious problems, and it’s free.
  4. Swap the patch cable first. Before anything else, replace the short patch cable between the wall jack and the AP. We have a box of known-good Cat6A patches on every truck, because failed patches are the #1 cause of intermittent AP reboots and the cheapest thing to rule out.
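Step 1 is easy to script once you’ve exported the event log as text. A rough triage sketch — the log lines and match strings below are made-up examples, so adjust the patterns to whatever your controller export actually emits:

```python
# Sort exported AP event-log lines into the two failure buckets:
# power-side faults (switch/injector) vs. link bounces (cable/termination).
import re
from collections import Counter

POWER = re.compile(r"poe fault|power loss", re.I)
CABLE = re.compile(r"link down", re.I)

def triage(log_lines):
    buckets = Counter()
    for line in log_lines:
        if POWER.search(line):
            buckets["switch/injector"] += 1    # look at PoE budget or injector
        elif CABLE.search(line):
            buckets["cable/termination"] += 1  # look at the run and its ends
    return buckets

log = [
    "2026-05-07 02:47:13 AP-Garage disconnected: Link Down",
    "2026-05-07 02:47:41 AP-Garage connected: 100FDX",
    "2026-05-08 03:02:09 AP-Garage disconnected: PoE fault",
]
print(triage(log))
```

Whichever bucket dominates tells you whether to start at the switch or at the wall jack.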

Fixing it without re-pulling cable

Most of the time you don’t need to re-run the cable. The fix is one of:

  • Re-terminate both ends. Cut the last 18 inches off both ends, re-punch the keystone, re-crimp the patch panel side, swap the patch cable. This fixes 70% of loose-pair issues without touching the in-wall run.
  • Replace the injector with a properly-classed one. A UniFi U-POE-AT or a MikroTik PoE+ injector with classification is $20 and resolves the “wrong class” cases.
  • Move the AP to a different switch port. If a single switch port is misbehaving, the adjacent port often works fine. Document it, move the cable, and add the bad port to a “do not use” note in the rack documentation.
  • Bigger switch, fewer injectors. On installs we inherit with a fleet of midspan injectors, the right move is usually a single larger PoE switch and a half day of patch-panel cleanup. Cleaner physically, easier to monitor, and one fewer class of failure mode in the closet.

How we prevent this on a new install

The headline rule: no midspan injectors in a new build, ever. Spec a switch big enough to power everything from day one with the 70% headroom rule applied. On a typical 4,500 sq ft Lehi or Draper custom home with three or four APs, eight cameras, a video doorbell, two Access readers, and a smart garage controller, that means a 24-port PoE switch with at least a 500 W budget — sometimes a 600 W. It’s a couple hundred dollars more than the minimum and buys you the headroom you need for the next round of equipment additions.
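Running the 70% rule from earlier against that example build shows why the minimum-size switch doesn’t cut it. The per-device worst-case watts here are rough planning assumptions, not measured figures:

```python
# 4 APs, 8 cameras, a doorbell, 2 Access readers, a garage controller.
# Worst-case watts per device are rough planning figures, not vendor specs.
loads = [26] * 4 + [12] * 8 + [7] + [8] * 2 + [5]

planned = sum(loads) * 1.25       # worst-case total plus 25% headroom
minimum_budget = planned / 0.70   # rated budget that keeps planned load under 70%
print(f"planned load: {planned:.0f} W, minimum switch budget: {minimum_budget:.0f} W")
```

The floor lands north of 400 W, which is why a nominal 400 W switch is already too small for this house and the 500 W model is the right call before you’ve added a single extra device.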

Beyond the switch sizing, the things that prevent the 3 AM symptom on a fresh install:

  • Pull Cat6A everywhere a PoE+ or PoE++ device might eventually live. The few-cents-per-foot savings on Cat6 evaporates the first time you have to re-pull a run because the next-generation AP pulls more current than the cable can deliver cleanly.
  • Test every run with a real continuity tester before drywall closes. We use a Klein VDV Scout for fast pass/fail and a Pockethernet for actual length, pair map, and PoE detection. We don’t close drywall on a cable that hasn’t been validated.
  • Label both ends. The number of service calls that turn into “which jack is which” treasure hunts is comical.
  • Spec a properly sized UPS for the rack. Brownouts at the rack are an underdiagnosed cause of the same symptom — the switch momentarily can’t deliver full PoE during a voltage sag, and the AP at the far end of the line is the first to feel it.
  • Document the install in the controller. A properly named site map with port assignments means the next service call is a 10-minute diagnosis, not a two-hour archaeological dig. See our pre-wire checklist for the full new-build sequence.

When it’s actually the firmware

Worth saying: occasionally it really is a firmware bug. UniFi has shipped a couple of releases over the years where a specific AP model would reboot under specific channel conditions. If every AP in the house is rebooting at the same time, on the same firmware, with no PoE fault and no link down events — roll back the firmware, file a UniFi support ticket, and wait for the patch.

But that’s the exception. In our service-call logs across the Wasatch Front, the breakdown is roughly: 45% loose-pair / bad termination, 30% PoE budget or injector failure, 15% bad patch cable, 5% actual firmware bug, 5% something genuinely weird (mice in the attic, lightning damage, a kid who unplugged a cable to charge a phone).

Bottom line

If your APs reboot at night, the network isn’t haunted. It’s either underpowered, marginally terminated, or both. Pull the event log, check per-port PoE telemetry, run TDR on the suspect runs, and swap patches before you blame the cloud or the ISP. The fix is almost always a small physical-layer correction — a re-termination, a bigger switch, or a properly-classed injector — not a network redesign.

The systems that don’t do this overnight aren’t magic. They’re just specced with headroom from day one.

Keystone Integration installs UniFi networks across Holladay, Draper, Lehi, Park City, and the rest of the Wasatch Front — sized with PoE headroom, terminated to spec, and documented so the 3 AM reboot story doesn’t happen on our installs. See the full service list or get in touch for a service call on a network that’s been getting flaky.