> PoE Standards Overview (IEEE 802.3)
For the record, 802.3bt was released in 2018:
* https://en.wikipedia.org/wiki/Power_over_Ethernet
It allows for up to 71W at the far end of the connection.
The UART standards didn't specify bit rates, which allowed the same standard to scale from 300 bps in the 1960s all the way up to 10+ Mbps in the 90s.
Why can't PoE standards do the same?
Simply don't set voltage or current limits in the standard, and instead let endpoint devices advertise what they're capable of.
> Simply don't set voltage or current limits in the standard,
There are thermal and safety limits to how much current and voltage you can send down standard cabling. The top PoE standards are basically at those limits.
> and instead let endpoint devices advertise what they're capable of.
There are LLDP provisions to negotiate power in 0.1W increments.
The standards are still very useful for having a known target to hit. It’s much easier to say a device is compatible with one of the standards than to have to check the voltage and current limits for everything.
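To make the granularity concrete, here's a rough Python sketch of how those 0.1W LLDP power values could be packed and unpacked. The full Power-via-MDI TLV carries more fields than this (power type, source, priority, etc.); only the two 16-bit power fields are shown, and the helper names are made up for illustration:

```python
import struct

def encode_power_fields(requested_watts: float, allocated_watts: float) -> bytes:
    # Pack PD-requested and PSE-allocated power as big-endian 16-bit
    # integers in units of 0.1 W, which is the granularity LLDP negotiates in.
    return struct.pack("!HH", round(requested_watts * 10), round(allocated_watts * 10))

def decode_power_fields(payload: bytes) -> tuple:
    # Unpack the two 0.1 W fields back into watts.
    requested, allocated = struct.unpack("!HH", payload)
    return requested / 10.0, allocated / 10.0

# A PD asking for 25.5 W and a PSE granting the same:
print(decode_power_fields(encode_power_fields(25.5, 25.5)))  # (25.5, 25.5)
```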
That would require them to know the standard of cable they're connected with
Unless you like home and warehouse fires
Or if you want to add per port fuses. That sounds incredibly expensive.
The standard is, well, a standard, and that’s why PoE is safe in the first place. Adding per-port fuses won’t stop bad cable from burning, because the fuse would have to be sized for the rating of the PoE switch.
This is why you don’t want “fake” Cat6 etc. cable. I’ve seen copper-clad aluminum sold as cat6 cable before, but that shit will break 100% of the time and a broken cable will absolutely catch fire from a standard 802.3at switch.
There are also distance limits based on the type of cable used and the power drawn by the end device. The more you push that, the more heat you build. Shielding reduces that heat factor.
https://en.wikipedia.org/wiki/Power_over_Ethernet#Power_capa...
For Dayjob I power a lot of very expensive not-even-on-the-market-yet radios and other equipment via multiple PoE standards, mixed vendors, 2 pair, 4 pair, etc., and we have run into all kinds of PoE problems over the years.
PoE fires do happen. Sometimes it's the cable or the connector, sometimes something happened to the cable run. Sometimes the gear melts.
https://www.powerelectronictips.com/halt-and-catch-fire-the-...
> There are also distance limits
It should be noted that there are two standards (of course) for Ethernet cabling, and one (TIA) officially hardcodes distances (e.g., 100m) while the other (ISO) simply specifies that the signal-to-noise ratio has to be within certain limits, which could allow for longer distances (>100m):
* https://www.youtube.com/watch?v=kNa_IdfivKs
A specific product that lets you go longer than 100m:
As for your note about PoE standards btw, I remember an old joke, something along the lines of "The best thing about standards is that there are so many to choose from!"
---
Non-standard implementations: There are more than ten proprietary implementations. The more common ones are discussed below.
https://en.wikipedia.org/wiki/Power_over_Ethernet#Non-standa...
Proper PoE sources have active per port current monitoring and will disable the PoE power in case of an over current event.
You can be over the thermal capacity of the cable without having too much draw on the port.
So the situation would be: you create a setup, buy devices and then they randomly shutdown as they pull too much current? How is that better than having a well defined standard that ensures compatibility?
The device first negotiates a certain current capability. If a device explicitly asks for 15W and then goes on to draw 60W, you can hardly call it a "random" shutdown: it is clearly misbehaving, so it is best to shut it down to prevent further damage.
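As a toy illustration (not anything from the standard), the policing logic on the PSE side amounts to something like this, with the margin and function names being made-up placeholders:

```python
# Allowance above the negotiated allocation before the port is cut off.
# Both the margin and these names are illustrative, not from 802.3.
ALLOCATION_MARGIN = 1.05

def police_port(allocated_watts: float, measured_watts: float) -> str:
    if measured_watts > allocated_watts * ALLOCATION_MARGIN:
        return "power off"  # misbehaving PD: cut power rather than cook the cable
    return "ok"

print(police_port(allocated_watts=15.0, measured_watts=60.0))  # power off
print(police_port(allocated_watts=15.0, measured_watts=14.2))  # ok
```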
Does POE+++++ measure the cable? If not, there's nothing in the protocol stopping you from overloading the cable.
Have you ever run a DC voltage-drop calculation for a cat5/6/7 cable?
It can be substantial. But yes, there are cable spec requirements for PoE depending on the demands of the device!
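For anyone who hasn't run that calculation, here's a rough back-of-the-envelope version in Python. The figures are assumptions (24 AWG copper at about 0.084 ohm/m per conductor, 54V at the source, current shared across the powered pairs), not numbers from any spec:

```python
import math

R_PER_M = 0.084  # ohm per metre per conductor, assumed typical 24 AWG copper
V_PSE = 54.0     # volts at the source, assumed

def drop(pd_watts: float, length_m: float, pairs: int = 4):
    # Current goes out and back over `pairs` pairs, so the DC loop
    # resistance is 2 * R_conductor / pairs.
    r_loop = 2 * R_PER_M * length_m / pairs
    # Constant-power load: solve P = V_pd * I with V_pd = V_PSE - I * r_loop.
    i = (V_PSE - math.sqrt(V_PSE**2 - 4 * r_loop * pd_watts)) / (2 * r_loop)
    v_pd = V_PSE - i * r_loop
    return round(v_pd, 1), round(i, 2), round(i * i * r_loop, 1)

# 60 W over 100 m of 4-pair: ~48.8 V at the device, ~1.2 A, ~6 W heating the cable.
print(drop(60, 100))
# 802.3af-style 15.4 W over 100 m on 2 pairs: the drop is much smaller.
print(drop(15.4, 100, pairs=2))
```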
As of 2017 the NEC has a whole new section for PoE devices above 60W, specifically covering safety and best practices. It DOES have cable requirements that impact the cable standard chosen.
More info on that here: https://www.panduit.com/content/dam/panduit/en/landing-pages...
From: https://reolink.com/blog/poe-distance-limit/?srsltid=AfmBOop...
---
PoE Distance Limit (802.3af)
The original 802.3af PoE standard ratified in 2003 provides up to 15.4W of power to devices. It has a maximum distance limit of 100 meters, like all PoE standards. However, because of voltage drop along Ethernet cables, the usable PoE distance for 15.4W devices is often only 50-60 meters in practice using common Cat5e cabling.
In addition, this note from Wikipedia: https://en.wikipedia.org/wiki/Power_over_Ethernet#Power_capa...
---
The ISO/IEC TR 29125 and Cenelec EN 50174-99-1 draft standards outline the cable bundle temperature rise that can be expected from the use of 4PPoE. A distinction is made between two scenarios:
bundles heating up from the inside to the outside, and bundles heating up from the outside to match the ambient temperature
The second scenario largely depends on the environment and installation, whereas the first is solely influenced by the cable construction. In a standard unshielded cable, the PoE-related temperature rise increases by a factor of 5. In a shielded cable, this value drops to between 2.5 and 3, depending on the design.
PoE+ Distance Limit (802.3at)
An update to PoE in 2009 called PoE+ increased the available power to 30W per port. The formal 100-meter distance limit remains unchanged from previous standards. However, the higher power budget of 30W devices leads to increased voltage drops during transmission over long distances.
PoE++ Distance Limit (802.3bt)
The latest 2018 PoE++ standard increased available power further to as much as 60W. As you can expect, with higher power outputs, usable distances for PoE++ are even lower than previous PoE versions. Real-world PoE++ distances are often only 15-25 meters for equipment needing the full 60W.
I understand all that but... let's imagine I run 60W 802.3bt over 100m of cat5. The voltage drop will be bad. So what actually happens? Does the device detect voltage droop and shut off? Or does the cable just catch on fire?
A longer cable won't just catch fire, because the power dissipation per unit of length is the same regardless of overall length. Imagine the most extreme case - a cable so long the voltage difference is 0V at the end. It's basically just a very long resistor dissipating 60W. But each meter of cable will be dissipating the same power as every other PoE setup.
Looping the cable or putting it in a confined space could cause issues. The cable could then catch fire even though it appeared to be operating normally to the PoE controller.
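A quick sketch of the per-length argument above, using assumed numbers (24 AWG, 4-pair delivery, a fixed 1.2A of current): the heat per metre stays the same no matter how long the run is, while the total heat in the cable grows with length, which is why bundling or coiling is what actually gets you into trouble:

```python
R_LOOP_PER_M = 2 * 0.084 / 4  # ohm per metre of run: assumed 24 AWG, 4-pair delivery
CURRENT_A = 1.2               # roughly a 60 W class load, assumed

for length_m in (10, 50, 100):
    per_metre = CURRENT_A ** 2 * R_LOOP_PER_M  # W/m: independent of run length
    total = per_metre * length_m               # W: grows with run length
    print(f"{length_m} m: {per_metre:.3f} W/m, {total:.1f} W total in the cable")
```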
In my understanding, it depends on the device and the cable installation. Let's say it's a 59W device so it doesn't fall under NEC regulations as of 2017 for PoE devices over 60W.
The device needs a certain amount of power to keep itself alive. Depending on how the device is designed and if actually adhering to standards, the device should simply not have enough power to start at say 80m, or let's say they pushed the install from the get-go (happens all the time) and it's actually 110m on poor / underspec'd cable.
And let's say the device has enough power to start, but you're using indoor cat5 that has been outdoors for 7 years, and you don't know this but it's CCA. If it's in a bundle with other similar devices drawing high power and there is enough heat concentrated at a bend, then yes, the cable could catch fire without the device having a problem. As long as the device has enough power it's going to keep doing its thing until the cable has degraded enough to cause signal drop, and assuming it's using one of the more modern 4-pair PoE standards, it would just shut off. But that could be after the drapes or that amazon box in the corner of the room caught fire.
We're just lucky in the residential space that PoE hasn't been as "mass market" as an iphone, and we've been slowly working into higher power delivery as demands have increased.
IMO? It's all silly though. We should just go optical and direct-DC whenever possible ;)
Depending on switch vendor and quality, they can actually increase the voltage output - the spec IIRC allows for up to 57V at the PSE, which an intelligent switch can modulate to overcome limited voltage drop. Cheaper switches (desktop etc.) just supply all ports 54V (or less, but it should be 54V at the source) from the same rail without any modulation.
The specs have a voltage range that the source can put out and a corresponding range of voltages that the device must accept at the end of the wire, after voltage drop.
Longer wires don’t increase the overheating risk because the additional heat is divided over the additional length.
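As a rough illustration of that range check (the 44V source-minimum and 37V device-minimum figures are the commonly quoted 802.3af numbers from memory, so treat them as illustrative rather than authoritative):

```python
R_LOOP_PER_M = 0.084  # ohm per metre of run: assumed 24 AWG, 2-pair delivery

def pd_voltage_ok(pse_min_v: float, pd_min_v: float, current_a: float, length_m: float) -> bool:
    # Worst case: lowest allowed source voltage minus the resistive drop over the run.
    v_pd = pse_min_v - current_a * R_LOOP_PER_M * length_m
    return v_pd >= pd_min_v

# 802.3af-ish worst case: 44 V source minimum, 37 V device minimum, 350 mA, 100 m.
print(pd_voltage_ok(44.0, 37.0, 0.35, 100))  # True: ~3 V of drop still leaves margin
```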
I mean... once you have to start buying specialized cables anyway, you might as well have specified in the PoE++[0] standard that only specialized PoE++ cables are accepted, verified by device handshake. And then you can engineer the whole thing to basically be "Powerline but with higher data rates", making the cable power-first instead of data-first.
[0] What horrible naming. PoE 2, 3, 4, etc. would have been much better.
Well, as of 2017 we have -LP marked cables now.
https://www.electricallicenserenewal.com/Electrical-Continui...
The standards basically specify the minimum power the source is supposed to be able to supply, and the maximum power the other end can sink.
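Put another way, each standard is just a pair of numbers you can look up: the minimum the PSE must be able to source per port and the maximum the PD may count on after cable losses. A small lookup like this, using the commonly cited figures, is all the "compatibility check" most people ever need:

```python
# Commonly cited per-port figures, in watts: what the source (PSE) must be able
# to supply and what the device (PD) can assume at its end.
POE_LEVELS = {
    "802.3af (Type 1)":       {"pse_min_w": 15.4, "pd_max_w": 12.95},
    "802.3at (Type 2, PoE+)": {"pse_min_w": 30.0, "pd_max_w": 25.5},
    "802.3bt (Type 3)":       {"pse_min_w": 60.0, "pd_max_w": 51.0},
    "802.3bt (Type 4)":       {"pse_min_w": 90.0, "pd_max_w": 71.3},
}

def fits(standard: str, device_needs_w: float) -> bool:
    """Can a device that needs this much power run from a port of this type?"""
    return device_needs_w <= POE_LEVELS[standard]["pd_max_w"]

print(fits("802.3at (Type 2, PoE+)", 20))  # True
print(fits("802.3at (Type 2, PoE+)", 40))  # False: needs an 802.3bt port
```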
Because power delivery depends on a lot of other things. The most important one I can think of is the cable: the Ethernet cable is a dumb one, with no way to tell its capability. USB-C solved this problem with the E-marker chip, which basically transforms a dumb cable into a smart one.
Even so, the PD protocol limits how much power can be transferred.
802.3bt changes how the wires are used physically. Power can now be negotiated to be delivered over previously data-only lines.
It was finalized in 2018, and by 2020 there were commercial offerings from major vendors. I know this as we developed an 802.3bt product in 2018.