May 5, 2026 · 12 min read

Why your AP keeps rebooting at night: PoE budgets, midspan injectors, and the loose-pair problem

An AP that reboots at 3 AM is almost never a broken AP. It is a PoE budget that does not add up, a midspan injector cooking itself, the wrong PoE standard on the port, or a single bad pair in the cable run. Here is the order to actually find and fix the cause.

Wi-Fi · PoE · Networking · Troubleshooting · UniFi

The most frustrating support ticket in home networking is some variant of “the Wi-Fi was down again at 3 AM.” Nobody saw it. The phones still showed full bars in the morning. The kids’ FaceTime calls dropped at 11 PM, came back, dropped at 2 AM, came back. The UniFi controller has a little orange dot next to one of the APs from sometime overnight, and by the time you’re looking at it, the AP is back online and reporting clean.

When we get called for this, the homeowner has usually already power-cycled the AP, restarted the gateway, factory-reset something they shouldn’t have, and bought a replacement AP off Amazon. None of that fixes it, because the AP isn’t broken. It’s starving for power, or one of its eight copper pairs is intermittent, or both. Here is what actually causes nightly AP reboots on Wasatch Front installs and the order in which we troubleshoot them.

The symptom: an AP that reboots, not a router that crashes

Before anything else, confirm the symptom. A whole network going down at night is a different problem (usually the gateway, the modem, or the ISP) than a single AP reboot. The clues for an AP-specific problem look like this:

  • Devices in one part of the house lose Wi-Fi while devices in another part stay online.
  • The UniFi event log shows “AP disconnected” followed a minute or two later by “AP connected” on the same device.
  • The AP’s uptime in the dashboard is short — 6 hours, 14 hours — even though you haven’t touched it in months.
  • The pattern is roughly daily and roughly the same time of night.

If that’s the picture, you almost certainly have one of three problems: a PoE budget that doesn’t add up, a midspan injector that’s cooking itself, or a single bad pair in the cable run. We’ll cover each. If you’re not sure where to start reading the symptoms, our UniFi dashboard guide walks through the event log and per-AP stats that tell this story.

PoE budget: the thing nobody calculates

Every PoE switch has a total power budget — the sum of watts it can deliver across all PoE ports at once. A small UniFi switch like the USW-Lite-8-PoE has a 52 W budget. The USW-24-PoE has 95 W or 250 W depending on the SKU. The big PoE++ switches go to 400 W or higher. If the devices plugged in want more than the switch can deliver, something stops getting power — and in many cases what stops getting power is whichever device is the lowest priority or the most recent to negotiate.

The trap on home networks is that the math looks fine on paper at install time and goes wrong later. A typical pre-wire ends up with four to six APs and six to twelve PoE cameras hanging off one switch. Each AP draws somewhere between 8 W (a U6 Lite) and 25 W (a U7 Pro Max with full 6 GHz radios busy). Each camera draws 5–15 W depending on whether IR is on, whether it’s heating its enclosure in winter, and whether the pan/tilt motors are firing. Add a PoE doorbell, a PoE intercom, a pair of PoE phones, and the budget that looked comfortable in July is over the line in January.

Why “winter only” reboots are usually PoE

Outdoor cameras have heaters. Outdoor APs have heaters. They both kick on below freezing. We have Park City and Heber installs where the same network has run flawlessly for two summers and starts rebooting APs nightly the first cold week in November. Nine times out of ten that’s the heaters pulling the switch over its budget at exactly the time of night when the wall is coldest.

How to do the math correctly

Add up the worst-case PoE draw of every connected device, not the typical draw. Worst case is what the device is allowed to negotiate to under 802.3at/bt, not what it averages on a mild day. Then add 25–30% headroom. If the total is inside the switch’s rated budget, you’re fine. If it’s within 10% of the budget, you’re going to have a bad winter.
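The headroom math above is simple enough to sketch in a few lines. This is illustrative only: the device names and wattages below are hypothetical examples, not vendor specs, and real switches negotiate by PoE class rather than raw watts.

```python
# Hypothetical worst-case draws in watts, as each device could negotiate
# under 802.3at/bt -- illustrative numbers, not datasheet values.
worst_case_draws = {
    "AP, main floor": 12.0,
    "AP, upstairs (Wi-Fi 7)": 25.0,
    "PTZ camera (IR + heater on)": 15.0,
    "fixed camera": 8.0,
    "PoE doorbell": 7.0,
}

def budget_check(draws, switch_budget_w, headroom=0.25):
    """Return (watts needed with headroom, verdict) for one PoE switch."""
    total = sum(draws.values())
    needed = total * (1 + headroom)
    if needed <= switch_budget_w:
        return needed, "ok"
    if total <= switch_budget_w:
        # Inside the budget only without headroom: expect a bad winter.
        return needed, "tight"
    return needed, "over"
```

For the device list above, `budget_check(worst_case_draws, 95)` reports "ok" (about 84 W needed against a 95 W budget), while the same load on a 52 W switch comes back "over".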

Two practical fixes: split the load across two switches (one on the AP/camera side, one on everything else) or replace the switch with a higher-budget model. PoE++ switches with 400 W or 720 W budgets are dramatically cheaper than they were three years ago, and they’re a better answer than juggling injectors. Our managed-switch primer covers what to look for if you’re sizing one from scratch.

802.3af, at, and bt: which standard your AP actually wants

There are three PoE standards in common use, and they are not interchangeable on the high end:

  • 802.3af (PoE): up to 15.4 W at the source, 12.95 W at the device. Fine for older APs (UAP-AC-Lite, U6 Lite), VoIP phones, and most fixed-lens cameras.
  • 802.3at (PoE+): up to 30 W at the source, 25.5 W at the device. The sweet spot for current-gen mid-tier APs (U6 Pro, U6 Enterprise) and PTZ cameras.
  • 802.3bt (PoE++ or 4PPoE): up to 60 W (Type 3) or 90 W (Type 4) at the source. Required for the Wi-Fi 7 flagships (U7 Pro Max, U7 Pro XGS), high-power multi-radio APs, and some heated outdoor enclosures.

If you plug a Wi-Fi 7 flagship into a PoE+ port, two things can happen. The lucky case: it boots in “reduced power mode” with one radio dialed back and runs forever, but slower than spec. The unlucky case: it boots, comes online, hits a high client count or a busy radio window, exceeds what the port can deliver, and resets. Reset cycle, reconnect, work fine for an hour, hit the same wall, reset again. That is exactly what 3 AM AP reboots look like.

The fix is to put the AP on a PoE++ port. Check the switch’s per-port spec, not just the total budget — many 24-port PoE switches only have a few PoE++ ports, and the rest are PoE+. Match the AP’s requirement to the port’s capability.
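The per-port check boils down to comparing the port's source-side limit against the AP's peak draw. A minimal sketch, with the caveat that real hardware negotiates by PoE class rather than raw wattage, and the 45 W Wi-Fi 7 figure below is an assumed example:

```python
# Source-side (PSE) power limits per IEEE standard, in watts.
PSE_LIMITS = {
    "802.3af": 15.4,    # PoE
    "802.3at": 30.0,    # PoE+
    "802.3bt-3": 60.0,  # PoE++ Type 3
    "802.3bt-4": 90.0,  # PoE++ Type 4
}

def port_can_power(port_standard, ap_peak_draw_w):
    """True if the port's standard covers the AP's peak negotiated draw."""
    return PSE_LIMITS[port_standard] >= ap_peak_draw_w
```

A hypothetical 45 W Wi-Fi 7 AP fails the check on a PoE+ port (`port_can_power("802.3at", 45)` is False) but passes on any PoE++ port, which is the reduced-power-or-reset-loop situation described above in one line of arithmetic.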

Midspan injectors: the silent failure point

A midspan injector is a small brick that sits between a non-PoE switch and a PoE device, injecting power onto the cable. They’re cheap, useful, and a little bit cursed. We see more nightly-reboot tickets caused by failing injectors than by anything else on the list.

Why injectors fail in the field:

  • Heat. An injector running at 80–90% of its rated wattage in a closed network closet hits 60 °C internal temperature easily. Capacitors degrade. Output sags under load. The AP starts cycling.
  • Underspec’d for the AP. A homeowner buys a generic 30 W injector for a Wi-Fi 7 AP that wants 45 W. It works on the bench. It fails the first time the radios get loaded.
  • Aging. The 24 V passive injector that came in the box with a 2018-era AP is now seven years old and has been running 24/7 the entire time. Its electrolytic caps are well past nameplate life.
  • Daisy-chain power strips. An injector plugged into a power strip plugged into another power strip plugged into a UPS that’s alarming about voltage drop is a real configuration we’ve walked into.

Our default on a clean install is to skip injectors entirely. A switch with enough PoE++ ports for every AP eliminates an entire class of failures. If injectors are unavoidable — retrofit, a single AP added after the fact — we use ones rated 50% above the device’s draw, mount them where they can shed heat, and replace them on a five-year cycle whether they seem to need it or not.
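The 50%-over-draw sizing rule is easy to encode. The list of off-the-shelf injector ratings below is an assumption for illustration; check what's actually available before buying:

```python
# Common off-the-shelf injector ratings in watts (illustrative list).
STANDARD_RATINGS = [15, 30, 60, 90]

def pick_injector(device_draw_w, margin=0.5):
    """Smallest standard injector rated at least `margin` above the draw."""
    required = device_draw_w * (1 + margin)
    for rating in STANDARD_RATINGS:
        if rating >= required:
            return rating
    return None  # nothing fits -- use a PoE++ switch port instead
```

A 25 W AP needs at least 37.5 W of injector, so `pick_injector(25)` lands on the 60 W unit, not the 30 W one that "works on the bench."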

The loose-pair problem

Ethernet runs on four twisted pairs. Gigabit uses all four. PoE carries power either on the data pairs (Mode A), on the spare pairs (Mode B), or across all four pairs (4PPoE, used by 802.3bt). If one of the eight conductors is intermittent — not severed, just a marginally bad crimp at one end — the link can stay up but PoE negotiation can fail.

What this looks like in practice: the AP gets power, boots, comes online, runs for a while, and the cable’s thermal expansion or a slight vibration shifts the bad pair just enough that power negotiation re-runs. The AP loses power for a few seconds, reboots, re-negotiates a working connection, and runs again until the next time the temperature changes. This is why loose-pair problems are so often nightly — the temperature in the wall and in the rack changes most around 2–4 AM as the HVAC cycles slow.

How to find a bad pair without an expensive tester

A real cable certifier (Fluke DSX, Versiv) will find this in 30 seconds, but they cost more than most homeowners want to spend. Three field tricks that work without one:

  • Force the link to gigabit. Gigabit uses all four pairs; 100 Mbps uses only two. If the link comes up clean at 1 Gbps, all four pairs are at least continuous; if it negotiates down to 100 Mbps under any condition, you have a bad pair. A cheap pair tester or even a spare laptop will confirm it.
  • Check the UniFi switch’s port statistics. CRC errors, FCS errors, and runt frames climbing on a single port point straight at a marginal cable. The dashboard surfaces these per-port; the dashboard guide shows where to find them.
  • Re-terminate both ends. If a cable is acting up and it’s under 100 ft, just cut both ends back an inch and re-crimp with fresh keystones or plugs. This fixes 80% of loose-pair calls because the most common cause is a punchdown that didn’t fully seat one conductor.
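If you're polling port counters yourself (via SNMP or the controller API), the "climbing errors" test from the second trick is just a delta check between readings. A minimal sketch; the threshold of 10 new errors between polls is an arbitrary assumption, and marginal cables often throw far more:

```python
def cable_suspect(samples, threshold=10):
    """Flag a port whose cumulative CRC/FCS error count is climbing.

    `samples` is a list of cumulative error counts read at successive
    polls; a healthy port's count barely moves between readings.
    """
    deltas = [later - earlier for earlier, later in zip(samples, samples[1:])]
    return sum(deltas) >= threshold
```

A port whose counter jumps from 0 to 40 over four polls gets flagged; one that creeps from 5 to 6 over a day does not.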

If the cable is the run from the rack to a soffit-mounted AP and re-terminating doesn’t fix it, the next step is replacing the cable. That’s a real labor cost in a finished home, which is why we push hard for proper cable-category and termination decisions during pre-wire on new construction — pulling a second cable to every AP location costs almost nothing during framing and saves a real headache later.

Power events: the third common cause

Even with good cable and a fat PoE budget, an AP can reboot because something upstream lost power and the switch did too. A brownout, a momentary ISP outage that triggers a gateway reboot, a UPS that switched to battery and back — all of those propagate down the chain. If multiple APs reboot at exactly the same moment, that’s power-event behavior, not a per-AP problem.
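Telling a power event apart from a per-AP fault is a clustering question: did multiple APs drop at (nearly) the same timestamp? A rough sketch that buckets reboot events into fixed windows — the AP names and 60-second window are assumptions, and a fixed-bucket approach can split two events that straddle a boundary:

```python
from collections import defaultdict

def correlated_reboots(events, window_s=60):
    """Group (ap_name, epoch_seconds) reboot events into time windows.

    Any window containing two or more distinct APs looks like an
    upstream power event rather than a per-AP fault.
    """
    buckets = defaultdict(set)
    for ap, ts in events:
        buckets[ts // window_s].add(ap)
    return [sorted(aps) for aps in buckets.values() if len(aps) >= 2]
```

Two APs dropping ten seconds apart land in one bucket and get reported together; a lone AP rebooting hours later does not.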

Wasatch Front summer storms produce a lot of short brownouts and surges that don’t fully kill power but do kick equipment into a reboot. A properly sized network UPS keeps the rack online through these events; we covered the math in our rack UPS sizing post and the broader smart-home-during-an-outage post.

The order we actually troubleshoot in

  1. Read the dashboard. Confirm it’s a single AP rebooting, not the whole network. Note the time and frequency.
  2. Check port stats. CRC and FCS errors point at cabling. Clean ports point at power.
  3. Add up the PoE budget. Worst-case draw of every device on the switch, plus 25%. Compare to switch rating.
  4. Verify the standard. Confirm the AP’s required PoE class (af/at/bt) matches what the port can supply.
  5. Replace or remove injectors. If an injector is in the path, swap it for a known-good unit or move the device to a direct switch port.
  6. Re-terminate the cable. Both ends. If errors persist, run a temporary replacement cable to confirm.
  7. Replace the cable if the temp run fixes it.
  8. Replace the AP only after everything above is ruled out. Hardware failure does happen, but it’s the last suspect, not the first.

On Wasatch Front installs in Holladay, Lehi, Park City, Heber City, and Draper, the breakdown of root causes we see is roughly: 35% PoE budget, 25% injector failure, 20% cabling, 10% upstream power event, 10% genuinely bad hardware. The cheap fixes solve three quarters of the calls.

Designing this out from the start

Most of these problems are avoidable on day one if the network is sized for what’s actually going to live on it.

  • Spec a switch with at least 50% headroom over today’s PoE draw. Devices added in year two and three should fit without re-doing the rack.
  • Match port-level PoE class to the AP class. If you’re running Wi-Fi 7 anywhere, the switch needs PoE++ ports on those runs. We compared the U7 Pro Max and U6 Pro for exactly this kind of decision.
  • Avoid daisy-chained injectors entirely on new builds. Run home-run cable from the rack to every AP location and let the switch do the work.
  • Pull two cables to every AP location during pre-wire. The second cable is a cheap insurance policy against the loose-pair problem ten years from now.
  • Choose Cat6 or Cat6A for AP and camera runs. The category guide covers when Cat5e is fine and when it isn’t.
  • Document each port: AP model, PoE class, watts drawn, cable length. Future-you will thank present-you the next time the dashboard goes orange.

Bottom line

APs that reboot at 3 AM are almost never broken APs. They’re an underspec’d PoE budget, the wrong PoE class on the port, a cooking injector, or a single bad pair in a 75-foot cable run that you can’t see. Each of those has a clean fix, and none of them require buying a replacement AP off Amazon. Read the dashboard, add up the watts, check the port standard, and re-terminate the cable — in that order.

Keystone Integration designs and installs Wi-Fi networks across Holladay, Alpine, Draper, Park City, Heber City, and the rest of the Wasatch Front — with PoE budgets sized for the way the system will actually be used and cabling done so the 3 AM reboots never start. See the full service list or get in touch if your AP keeps coming back to life every morning.