The Internet network as a utility

A recent Slashdot question to readers, How Much Does Your Work Depend on the Internet?, includes a number of posts remarking on how important redundancy is to running an effective network.

Also, Ars is noting that the fight over encrypted BitTorrent is coming down to the creation of systems that systematically examine each packet a user sends or receives.

This has brought a few questions to mind:

First, how technically effective could a network be if it is neither redundant nor packet neutral? The original DARPA network was designed with redundant, decentralized routing so it could withstand attack, and it was later found that this design has the added benefit of greater reliability, since packets can be routed along the path of least resistance. Routers know where to send packets, but if they must additionally take on the task of looking ‘inside the envelope,’ will they still be as effective (read: fast and reliable)?
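To make the asymmetry concrete, here is a toy sketch (not a real router; the 4-byte address field and the sample packet are invented for illustration) contrasting header-only forwarding, which reads a fixed number of bytes, with payload inspection, which must scan the whole packet. The one real detail is the BitTorrent handshake prefix, byte 0x13 followed by the string “BitTorrent protocol”:

```python
# Toy illustration only: the packet format and 4-byte "address" field
# are invented for this sketch, not any real protocol.

HEADER_LEN = 4  # pretend the first 4 bytes are the destination address

def route_by_header(packet: bytes) -> bytes:
    """Header-only forwarding: read a fixed-size field and stop."""
    return packet[:HEADER_LEN]  # constant work, regardless of payload size

# Real detail: the BitTorrent handshake begins with byte 0x13
# followed by the string "BitTorrent protocol".
BT_SIGNATURE = b"\x13BitTorrent protocol"

def inspect_payload(packet: bytes) -> bool:
    """Deep inspection: scan the entire payload for a protocol signature."""
    return BT_SIGNATURE in packet[HEADER_LEN:]  # work grows with packet size

packet = bytes([10, 0, 0, 7]) + BT_SIGNATURE + b"\x00" * 1400
print(route_by_header(packet))  # forwarding touched only 4 bytes
print(inspect_payload(packet))  # True, but only after scanning ~1.4 kB
```

The point of the sketch is simply that forwarding cost is constant per packet, while inspection cost scales with packet size, which is one reason ‘looking inside the envelope’ threatens speed.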

Second, I am having a difficult time understanding the basis for throttling the bandwidth of BitTorrent users (or users of any other resource-intensive application). My understanding of purchasing a network service is that one is allowed to use the service in whatever legal way one sees fit, up to the capacity that was purchased (be it minutes or kb/s). BitTorrent users may use a greater amount of this capacity over time…but didn’t they already pay for it?
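The arithmetic behind that intuition is simple. A quick back-of-envelope sketch (the 5 Mbit/s rate here is a made-up example, not a figure from any actual ISP plan):

```python
# Hypothetical numbers: a 5 Mbit/s line rate, assumed for illustration.
RATE_MBIT_S = 5
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

# Maximum data a subscriber could move by saturating the purchased rate:
max_gb = RATE_MBIT_S * SECONDS_PER_MONTH / 8 / 1000  # Mbit -> MB -> GB
print(f"Theoretical monthly ceiling: {max_gb:,.0f} GB")  # 1,620 GB
```

A heavy BitTorrent user sits closer to that ceiling than a casual browser does, but both are inside the capacity the advertised rate implies; the question is whether the sale was really of the rate or of some unstated fraction of it.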

Perhaps these are two further aspects of the definition of network neutrality: packet neutrality and service/content neutrality (that is, the freedom to use a service in whatever capacity desired, as long as it’s within the bounds of time/’bandwidth’ limitations).