The complaint is always phrased the same way: “The Wi-Fi was down again at 3 AM.” A Nest cam goes offline overnight, the bedroom Sonos drops, the kid’s Switch can’t reach the cloud save by morning, and by the time anyone tries to troubleshoot it, the network is back. Reboot the AP from the controller, everything looks fine, and the homeowner is told it must be the ISP. Two weeks later it happens again.
In our experience across the Wasatch Front, this specific symptom — APs that quietly cycle in the middle of the night and recover before anyone wakes up — is almost never the ISP and almost never an AP defect. It’s a power-delivery problem. Either the PoE budget on the switch is wrong, the midspan injector is doing something the homeowner didn’t know about, or one bad RJ45 pair is renegotiating link speed every few hours. Here’s how we diagnose each one.
Why APs reboot at 3 AM specifically
There’s a real reason these symptoms cluster overnight, and it has nothing to do with the time on the clock. Two things happen at low utilization that don’t happen during the day:
- The switch fan slows down. Most modern PoE switches have temperature-aware fans. With everyone asleep and traffic low, the fans drop to near-silent. The switch warms up. PoE controllers have a thermal shutdown threshold per-port and per-chassis, and a marginal install spends the day on the right side of it and the night on the wrong side.
- Background firmware and database tasks fire. UniFi, Aruba, and Meraki controllers all run overnight maintenance windows by default. APs do cron-driven log rotation and config syncs. A power rail that’s already at 95% of budget can hold steady through the day and trip during a five-second sync burst.
If the symptom is “Wi-Fi blinks every night around 3 AM,” the diagnosis flowchart is short. Pull the event logs out of the controller (we’ve got a walkthrough on reading your UniFi dashboard) and look for “adopted” or “LAN disconnected” events between 2 and 4 AM. If you see them, you have a power problem. The rest of this post is figuring out which kind.
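If your controller lets you export its event log (most can dump one to CSV or JSON), a few lines of Python will pull out the overnight window for you. This is a minimal sketch, assuming a CSV export with an ISO-8601 “timestamp” column and a free-text “message” column; the column names, file name, and keywords are placeholders, so adjust them to whatever your controller actually produces.

```python
# Sketch: flag overnight AP disconnect/adopt events in a controller event export.
# Assumes a CSV with ISO-8601 "timestamp" and free-text "message" columns --
# adjust both to match your controller's actual export format.
import csv
from datetime import datetime

KEYWORDS = ("adopted", "disconnected")

def overnight_events(path, start_hour=2, end_hour=4):
    hits = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            in_window = start_hour <= ts.hour < end_hour
            if in_window and any(k in row["message"].lower() for k in KEYWORDS):
                hits.append((ts, row["message"]))
    return hits

for ts, msg in overnight_events("events.csv"):
    print(ts, msg)
```

A couple of nights of hits clustered in that window is enough to move on to the power checks below.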
1. PoE budget math: what the spec sheet says vs what your switch actually delivers
Every PoE switch advertises a total PoE budget. A UniFi USW-Pro-24-PoE has a 400 W budget. A USW-24-PoE has a 95 W budget. Read the fine print: that number is across all ports, not per-port, and it assumes a certain ambient temperature.
The math gets ugly fast. Take a fairly normal Lehi or Draper install:
- 4 × UniFi U7 Pro Max APs @ ~22 W = 88 W
- 2 × G5 Bullet cameras @ ~7 W = 14 W
- 2 × G5 Pro cameras @ ~12 W = 24 W
- 1 × G5 Doorbell Pro @ ~8 W = 8 W
- 1 × access-control reader @ ~6 W = 6 W
- 1 × VoIP phone @ ~4 W = 4 W
That’s 144 W of negotiated demand on a switch rated 95 W. The homeowner doesn’t see this on day one because PoE is opportunistic — the switch allocates power as devices come online, and at first boot only some are negotiating their full power class. A camera in 1080p mode pulls less than a camera that just had a smart-detection event. Adding one more PoE+ camera six months later pushes the chassis past its budget, and the lowest-priority device — usually an AP in the basement — gets cut.
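If you want to run the same math on your own rack, it takes about ten lines. The draws below are the nameplate-style estimates from the list above, not measured values; swap in your own device list and chassis rating.

```python
# Back-of-napkin PoE budget check using the estimated draws listed above.
# These are nameplate-style numbers, not measured values.
devices = {
    "U7 Pro Max AP":         (4, 22),  # (count, watts each)
    "G5 Bullet camera":      (2, 7),
    "G5 Pro camera":         (2, 12),
    "G5 Doorbell Pro":       (1, 8),
    "Access-control reader": (1, 6),
    "VoIP phone":            (1, 4),
}
chassis_budget_w = 95  # total PoE budget of the switch, per its spec sheet

total_w = sum(count * watts for count, watts in devices.values())
print(f"Negotiated demand: {total_w} W on a {chassis_budget_w} W chassis")
if total_w > 0.8 * chassis_budget_w:
    print("Past 80% of the rated budget -- expect the switch to shed ports under load.")
```

Running it on the install above prints 144 W against 95 W, which is exactly the kind of gap that stays invisible until every device negotiates its full draw at once.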
Worse, most consumer-tier PoE switches don’t cut cleanly. They drop the AP, the AP reboots, the AP comes back online, the AP renegotiates 22 W on a chassis that’s already at the line, and you get a cycle. In the controller, this looks like an AP that flaps every 90 seconds for ten minutes, then stabilizes when some other device on the chassis idles down.
Two practical fixes. First, if you’re sizing a rack and you can plan ahead, buy the PoE switch one tier larger than the math says you need. Our managed-vs-unmanaged switch post gets into this. A USW-Pro-Max-24-PoE with a 400 W budget is roughly $200 more than a 95 W chassis and buys you years of headroom. Second, if you already own the smaller switch, prioritize the AP ports in config — UniFi lets you mark ports as “critical” so they don’t get shed first when the budget tightens.
2. 802.3af vs at vs bt: the silent class mismatch
A meaningful chunk of the “AP reboots at night” calls we get involve a homeowner who replaced a Wi-Fi 6 AP with a Wi-Fi 7 AP and didn’t look at the power class.
- 802.3af (PoE): ~15.4 W source, ~12.95 W device.
- 802.3at (PoE+): ~30 W source, ~25.5 W device.
- 802.3bt Type 3 (PoE++): ~60 W source, ~51 W device.
- 802.3bt Type 4: ~90–100 W source, ~71 W device.
A UniFi U6-Lite needs only PoE (802.3af). A U6-Pro wants PoE+ (802.3at). A U7 Pro Max wants PoE+ — but it will boot on PoE, just not stay up under load with a full 6 GHz radio active. We see this constantly: the AP installs, the homeowner sees five bars, everything looks great in daylight, and the moment the radio actually starts pushing real client load (typically evening when everyone’s home), the AP browns out, reboots, and comes back when the load drops again.
The diagnostic is fast. Look at the AP’s reported input power in the controller. If a U7 Pro Max reports 12.5 W when its draw under load should be ~22 W, you’re feeding it 802.3af when it wants 802.3at. The fix is a switch port that supports the right class, or a midspan injector at the right tier.
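If you want that check as something repeatable, compare the reported input power and the expected draw against the PD-side ceilings from the list above. This is a minimal sketch; the 12.5 W and 22 W inputs are just the example figures from this section.

```python
# Rough power-class sanity check. PD-side ceilings are from the list above;
# the 12.5 W / 22 W inputs are the example figures from this section.
PD_LIMIT_W = {
    "802.3af (PoE)":          12.95,
    "802.3at (PoE+)":         25.5,
    "802.3bt Type 3 (PoE++)": 51.0,
    "802.3bt Type 4":         71.0,
}

def smallest_class_for(watts):
    for name, limit in PD_LIMIT_W.items():
        if watts <= limit:
            return name
    return "beyond 802.3bt Type 4"

def diagnose(reported_input_w, expected_draw_w):
    if reported_input_w < expected_draw_w:
        return (f"Fed at {smallest_class_for(reported_input_w)} levels "
                f"but needs {smallest_class_for(expected_draw_w)} -- class mismatch.")
    return "Reported input covers the expected draw; the power class looks right."

print(diagnose(12.5, 22))
```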
3. The midspan injector problem nobody warns you about
On older homes — anything pre-wired before 2018 in Holladay, Bountiful, Murray, parts of Sandy — we routinely find AP runs that bypass the rack PoE switch entirely. An installer at some point ran a Cat5e to a ceiling box, dropped a single-port PoE injector at the rack, and called it good. That works for one AP. It fails quietly when you scale.
- Cheap injectors thermally fail. Single-port injectors live in a rack or closet, often crammed against a UPS that’s pumping out heat. The injector’s own thermal cutoff fires intermittently in summer, which on the Wasatch Front means anything from late June through September. The AP cycles. The injector cools off. The AP recovers. The homeowner thinks the AP is bad.
- Daisy-chained injectors compound power drop. Some installers, faced with a long cable run, will inline a second injector. This almost never works the way they expect — voltage drop on a marginal cable plus the second injector’s inrush behavior creates a renegotiation loop that looks identical to a flaky AP.
- Injectors don’t do per-port priority. A managed switch can prioritize critical PoE ports when the budget tightens. A standalone injector can’t. When something on the chassis upstream misbehaves, the AP just goes dark.
If you’ve got an AP on a midspan injector and you’re seeing nighttime reboots, the cleanest fix is to retire the injector and home-run the cable to a managed PoE switch. We do this on most retrofits — the rack has the cooling, the budget visibility, and the priority controls. A $40 injector saved someone $40 during the original install and now costs hundreds in troubleshooting.
4. The loose-pair problem: bad termination, intermittent symptoms
This is the one that’s hardest to find and almost always the actual cause when none of the budget math explains it. Ethernet has four pairs. Two are enough for a 10/100 Mbps link; all four are mandatory for 1 Gbps and up. PoE rides on either two of the pairs (Mode A or Mode B in 802.3af/at) or all four (802.3bt PoE++).
If one pair has a marginal connection — a punchdown that’s 95% seated, a keystone jack with a slightly bent contact, an RJ45 plug crimped a little too shallow — everything looks fine on the link until something stresses it. The link will train at 1 Gbps and hold for hours, then a thermal change in the wall causes the marginal contact to lose continuity for 50 milliseconds, the switch and AP renegotiate, and the AP power-cycles during the renegotiation because PoE briefly drops with the link.
The diagnostic markers:
- Repeated link speed renegotiation events in the switch logs — you’ll see the same port go from 1 Gbps to 100 Mbps to 1 Gbps over the course of a few minutes. A clean cable does not do that.
- FCS error counters climbing slowly on the port that hosts the suspect AP. Anything above ~1 in 10 million packets is suspect; anything above 1 in 1 million is a confirmed cable problem (see the quick rate check after this list).
- The AP reboots only when the temperature swings. This is the giveaway — the temperature swing in a basement that drops from 68 °F to 62 °F over a Utah winter night, or in an attic that bakes in summer and cools overnight, expands and contracts the cable enough to flex a marginal contact. We’ve seen this on Cat5e runs in unconditioned attics in Lehi where the daily temperature swing is 40 °F or more.
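Turning raw counters into those error rates is a single division. The sketch below assumes you can read two totals off the port, FCS errors and received packets; counter names vary by switch, and the thresholds are the rules of thumb quoted above, not a standard.

```python
# Convert raw port counters into an FCS error rate and compare it against the
# rule-of-thumb thresholds above. Counter names vary by switch; all this needs
# is the two totals, however you pull them.
def fcs_verdict(fcs_errors, packets_rx):
    if packets_rx == 0:
        return "no traffic counted yet -- let the port run longer"
    rate = fcs_errors / packets_rx
    if rate > 1e-6:   # worse than 1 in 1,000,000
        return f"{rate:.2e} -- confirmed cable problem"
    if rate > 1e-7:   # worse than 1 in 10,000,000
        return f"{rate:.2e} -- suspect, re-terminate and re-test"
    return f"{rate:.2e} -- within normal noise"

# Example: 340 FCS errors over 180 million received packets.
print(fcs_verdict(340, 180_000_000))
```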
The only real fix is a re-termination, or in stubborn cases a full re-pull of the cable. A proper certifier test (Fluke DSX or equivalent) will catch most marginal pairs immediately. A cheap continuity tester will not — it tells you all four pairs are present, not that any of them are clean. This is one of the reasons we certify every cable on a new build before sign-off; see our Cat5e vs Cat6 vs Cat6A guide for the broader cable-quality picture.
5. The Wasatch Front winter version of this problem
There’s a Utah-specific flavor that’s worth calling out. Outdoor APs and outdoor cameras feed through cable that often passes through unconditioned space — soffit boxes, attic crossings, garage chases. Cold contracts the cable and its contacts. We’ve seen identical installs in Park City and Heber where the AP runs clean April through October and starts cycling once nighttime temperatures drop below freezing. The cable is fine in summer because the contacts are tight at room temperature. In winter, the metal contracts away from a marginal pin, the link drops, and you get an AP cycle every time the temperature crosses a threshold.
If you’ve got an outdoor AP serving a hot tub deck or a ski-storage area in Park City or Heber City and it works in summer but flakes in winter, suspect cold-induced contact failure first. Our outdoor Wi-Fi in Utah winter post covers the broader weatherization issues.
How we run the diagnostic on a real call
When a homeowner calls with the “reboots at night” symptom, this is the order we work through:
- Pull 30 days of switch port logs and AP event logs. Confirm the symptom is a reboot, not a controller disconnect. (A controller disconnect with the AP still up is a different problem entirely — usually a firewall rule or a VLAN misroute.)
- Check the chassis PoE budget vs the sum of negotiated draws. If the chassis is past 80% of its rated budget, that’s the cause until proven otherwise.
- Confirm the AP’s power class matches the switch port’s class. Document the reported input power in the controller.
- Look for FCS errors and link renegotiations on the port over the same window the reboot happened. If they correlate, it’s the cable.
- Walk the cable path. Touch every termination. Half the time the bad pair is at the keystone behind a cabinet someone leaned against during install.
- If nothing is obvious, swap the AP onto a port we’re sure is clean and let it run a week. If the new port behaves, the cable is the problem. If it still cycles, it’s the AP or the firmware.
This is the same checklist we walk through on every call. The vast majority of these never end up being a hardware defect. They’re a budget miss, a class mismatch, an injector hiding in a closet, or a marginal pair from a cable that was rushed at install.
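The log-correlation step is the one worth scripting if you do this more than once: given reboot timestamps and link-renegotiation timestamps for the same port, flag the reboots that have a renegotiation within a few minutes. The timestamps and the five-minute window below are made-up examples, not values from a real call.

```python
# Sketch of the log-correlation step: flag AP reboots that have a link
# renegotiation on the same port within a few minutes of them.
# The timestamps and the five-minute window are made-up examples.
from datetime import datetime, timedelta

def correlated(reboots, renegotiations, window_minutes=5):
    window = timedelta(minutes=window_minutes)
    return [r for r in reboots if any(abs(r - n) <= window for n in renegotiations)]

reboots = [datetime(2025, 1, 14, 3, 7), datetime(2025, 1, 15, 2, 54)]
renegs  = [datetime(2025, 1, 14, 3, 5), datetime(2025, 1, 15, 11, 20)]

hits = correlated(reboots, renegs)
print(f"{len(hits)} of {len(reboots)} reboots line up with a renegotiation")
# Most reboots lining up points at the cable; none lining up points at budget or class.
```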
How to avoid the problem on a new build
If you’re building or remodeling, the preventive measures are cheap during construction and expensive afterward:
- Spec Cat6A for every AP and PoE++ camera run, not Cat6. The headroom matters as PoE budgets climb.
- Home-run every AP cable to the rack. No splice boxes. No daisy chains. No midspan injectors.
- Buy the next size up on the PoE switch. The incremental cost is a rounding error compared to the labor cost of a service call later. Our server rack guide covers the layout decisions in detail.
- Certify every cable. A line-item on the install that says “Fluke DSX certification report delivered to the homeowner” is one of the cheapest insurance policies you can buy on a pre-wire. See our pre-wire checklist for the full list.
- Put the rack on a real UPS sized for the PoE load. Brownouts on the Wasatch Front during winter storms cause the same renegotiation cascade as a bad cable, and a UPS rides through them cleanly.
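For the UPS item, a rough sizing pass is just arithmetic. The headroom, power factor, usable-battery fraction, and inverter efficiency below are illustrative assumptions, and the 180 W load is a stand-in for the budget math above plus the switch and gateway themselves; check the vendor’s runtime tables before buying anything.

```python
# Rough UPS sizing for a PoE rack. The headroom, power factor, usable-battery
# fraction, and inverter efficiency are illustrative assumptions -- check the
# vendor's runtime tables before buying.
def ups_sizing(load_w, headroom=1.25, power_factor=0.9,
               battery_wh=600, usable_fraction=0.8, inverter_eff=0.85):
    min_va = load_w * headroom / power_factor
    runtime_min = battery_wh * usable_fraction * inverter_eff / load_w * 60
    return min_va, runtime_min

load_w = 180  # stand-in: PoE demand from the budget math plus the switch and gateway
va, runtime = ups_sizing(load_w)
print(f"Minimum rating: ~{va:.0f} VA; rough runtime on a 600 Wh battery: ~{runtime:.0f} min")
```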
Bottom line
APs that reboot at night are almost always a power-delivery problem, not an AP defect or an ISP problem. The four most common causes — chassis PoE budget exceeded, power-class mismatch, midspan injector failure, and a single bad RJ45 pair — cover roughly 90% of the calls we go out on. Each is fixable in under an hour once it’s identified, but each is impossible to diagnose without the right event logs and port-level visibility, which is why we’re generally not enthusiastic about consumer routers and unmanaged switches on installs that have more than two APs.
If your network has been doing this for months and nobody can tell you why, it’s almost certainly one of these four. Get the logs, walk the cable, do the budget math.
Keystone Integration troubleshoots flaky AP and PoE installs across Holladay, Draper, Lehi, Park City, and the rest of the Wasatch Front — with the certifier in hand and the controller logs pulled before we touch anything. See the full service list or get in touch for a site visit if your APs are misbehaving.