Ethernet Bandwidth

From the official 10 Mbps, you get roughly 7 Mbps of usable bandwidth (above that, the network saturates)

Aside: questions to Markus Buchhorn:

Where does that limitation come from?

The 70% number came from shared Ethernet, where everybody effectively shared a single cable, and only one machine could talk at a time - statistically you could not run data on it more than 70% of the time, and it is 10Mb/s total across all the connected devices.
Switched Ethernet is different, in that it's only you and the switch on that single cable.
What about 100 Mbps and 1 Gbps Ethernet?
Both of these are only run in switched mode, not in shared mode.
On top of that, you need to account for the usual protocol overhead costs.

Is there some way (tables...) to easily evaluate these costs, or is it necessary to sniff your network each time you want to be sure that you have enough bandwidth available? (Of course you have to do that on a shared network, but on your local Ethernet you should be able to be more in charge?)

You can look at tables or standards documents to identify protocol overheads. In general these are "pretty small", around 10% or so.
In terms of measuring whether you have enough bandwidth available: you can't do that deterministically on any frame/packet-based network. You cannot predict when another device will send a burst of large packets, which fill up buffers, or a very large number of small packets, which chew up the CPU. This can vary on timescales from milliseconds to mega-minutes. All you can do is look at the path and identify where you might see congestion. Applications running over IP have to be able to deal with congestion, which shows up as latency and jitter, as packet loss, or as routing changes that also impact latency and jitter. A sad fact, but that's also the power of the Internet.
On your local network, the main bandwidth constraint is the switch in the middle, and then any other applications that are talking to your two endpoints. If you have two PCs on their own on a dedicated switch and no other applications, you should be able to get close to 100% throughput. I've seen over 95Mb/s transport rate on a 100Mb/s Ethernet network.
If you have a need to guarantee bandwidth, there are some ways you can do that - Ethernet and IP both allow traffic to be tagged with 'priorities', giving those frames/packets better (or worse) treatment than other traffic. That requires support from the hardware along the way though, and is not a 100% guarantee in all cases. To get guarantees like that requires a more deterministic network like ATM and its virtual circuits.
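
As a rough illustration of those overhead costs, the sketch below (Python) adds up the usual untagged Ethernet II framing (preamble, MAC header, FCS and inter-frame gap, 38 bytes on the wire), an option-free IPv4 header (20 bytes) and a UDP header (8 bytes). The exact figures shift with VLAN tags, IP options and frame size, and small frames pay proportionally more than full-size ones; this is an illustrative estimate, not a measurement.

# Rough protocol-overhead estimate for a UDP payload carried over Ethernet.
# Header sizes are the usual untagged Ethernet II / IPv4 / UDP values;
# VLAN tags, IP options, or IPv6 would change them.

ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble+SFD, MAC header, FCS, inter-frame gap
IP_HEADER    = 20                # IPv4, no options
UDP_HEADER   = 8

def overhead_fraction(payload_bytes: int) -> float:
    """Fraction of wire time spent on framing/headers rather than payload."""
    wire_bytes = payload_bytes + UDP_HEADER + IP_HEADER + ETH_OVERHEAD
    return 1.0 - payload_bytes / wire_bytes

for payload in (144, 512, 1472):
    print(f"{payload:5d}-byte payload: {overhead_fraction(payload):5.1%} overhead")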

A PDU is 144 bytes x 8 bits/byte, or 1,152 bits.

N.B.: Mbps = 1000 x 1000 bps, kbps = 1000 bps, Gbps = 1000 Mbps

Dividing 7,000,000 bps by 1,152 bits per PDU means we can send about 6,076 PDUs per second on our Ethernet LAN.
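
As a quick sanity check on that figure, the same arithmetic in Python, taking the ~7 Mbps usable figure from above and the 1,152-bit PDU:

# PDU budget on a shared 10 Mbps Ethernet with ~70% usable bandwidth.
USABLE_BPS = 7_000_000          # ~70% of the nominal 10 Mbps
PDU_BITS   = 144 * 8            # 144-byte PDU = 1,152 bits

pdus_per_second = USABLE_BPS / PDU_BITS
print(f"{pdus_per_second:.0f} PDUs per second")   # ~6076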

If we assume mayhem, all players firing once each second, we can make some statements about the outer bounds of what we can support.

So we can support between 184 and 759 players on our LAN, assuming away all other usage.
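
The 184 to 759 range comes from dividing the PDU-per-second budget by a per-player PDU rate. The quoted bounds are consistent with each player generating roughly 33 PDUs per second at the worst and about 8 at the best; those per-player rates are an inference here, not something stated above. A minimal sketch under that assumption:

# Outer bounds on player count, dividing the ~6,076 PDU/s budget by an
# assumed per-player PDU rate. The 8 and 33 PDU/s figures are assumptions
# chosen to reproduce the 759 and 184 player bounds quoted above.

USABLE_BPS = 7_000_000               # ~70% of the nominal 10 Mbps
PDU_BITS   = 144 * 8                 # 1,152 bits per PDU
PDU_BUDGET = USABLE_BPS / PDU_BITS   # ~6,076 PDUs per second

for pdus_per_player in (8, 33):      # assumed low/high per-player send rates
    print(f"{pdus_per_player:2d} PDU/s per player -> "
          f"{int(PDU_BUDGET / pdus_per_player)} players")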

NPSNET-IV maxed out at about 300 players.