The world in which IPv6 was a good design (2017) (apenwarr.ca)

by signa11 166 comments 231 points

[−] Dagger2 25d ago
Our world. It was a good design in our world.

I don't think v6 is the absolute pinnacle of protocol design, but whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6. If people consistently can't do better than v6, then I'd say v6 is probably pretty decent.

[−] zrail 25d ago

> they end up coming up with something equivalent to IPv6

Not just that. Almost every single thing people think up that's "better" is something that was considered and rejected by the IPv6 design process, almost always for well-considered reasons.

[−] dwattttt 25d ago
The converse also happens: people look at something IPv6 supports and say "that's crazy, why would that be allowed/designed for?", without knowing that IPv4 does it too.
[−] Dagger2 25d ago
Or frequently, considered and accepted. 6to4 is a popular one to reinvent.
[−] api 25d ago
In retrospect I think just adding another 16 or 32 bits to V4 would have been fine, but I don’t disagree with you. V6 is fine and it works great.

All the complaints I hear are pretty much all ignorance except one: long addresses. That is a genuine inconvenience and the encoding is kind of crap. Fixing the human readable address encoding would help.

[−] pocksuppet 25d ago
If you add new bits to v4 you invent an incompatible protocol, and you should add a lot of bits so you'll never have to invent another incompatible protocol again. You can also fix the minor annoyances in v4.
[−] bombcar 25d ago
Flexible! The first byte tells you how many bytes of addressing you have. Perfect and future proof!
[−] tremon 25d ago
Hardware implementations typically do not like variable-size fields. Not just because the total header size becomes unpredictable, but because it means any following fields no longer have a fixed offset, and that complicates parsing.
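
A toy illustration of that parsing problem (not any real protocol, just a sketch of a "length byte first" format):

  import struct

  def parse_toy_header(packet: bytes):
      # Hypothetical format: [addr_len][address bytes...][16-bit port].
      # Once the address length is data, the offset of every later field
      # depends on an earlier read, so the parse becomes sequential instead
      # of a handful of fixed-offset fetches done in parallel.
      addr_len = packet[0]
      addr = packet[1:1 + addr_len]
      (port,) = struct.unpack_from("!H", packet, 1 + addr_len)
      return addr, port

  print(parse_toy_header(bytes([4, 192, 0, 2, 1, 0x1F, 0x90])))  # (b'\xc0\x00\x02\x01', 8080)

With a fixed header, by contrast, the IPv4 source address is always bytes 12-15, which is exactly the kind of constant offset hardware likes.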
[−] jasomill 25d ago
At best future-resistant.

True future-proofing would require representing address length as an arbitrary-precision nonzero unsigned integer.

Since allowing a zero-length network address format would serve no purpose other than to pointlessly complicate standards definitions, you could trivially and without loss of generality interpret zero to denote some extended-length address length representation to be defined in a future version of the standard.

[−] perennialmind 25d ago
IPv4 was designed with extension headers: it boggles my mind that simply using the headers to extend the address was never seriously considered. It was proposed: https://www.rfc-editor.org/rfc/rfc1365.html

It still would have been a ton of work, but we could have just had what IPv6 claimed to be: IPv4 with bigger addresses. Except after the upgrade, there'd be no parallel system. And all of DJB's points apply: https://cr.yp.to/djbdns/ipv6mess.html

[−] api 25d ago
Here’s my understanding.

The people involved in core Internet protocol design were used to the net being a largely walled garden of governments, corporations, universities, and a small number of BBSes and niche ISPs.

Major protocol upgrades had happened before, not just for the core protocol but all kinds of other then-core services.

It had been a while but not that long, I think less than 20 years, and last time it was pretty easy. They assumed they could design something better and phase it in and all the members of the Internet community would just do the right thing.

That’s probably what made them feel they could push a more radical upgrade.

Unfortunately they started this right as the massive tsunami of Internet commercialization hit. Since V6 was too new, everyone went with V4. Now all of a sudden you had thousands of times more nodes, sites, and personnel, and all of them were steeped in IPv4 and rushing to ship on top of it. You also lost the small-town atmosphere of the early net, where admins were a club and could coordinate things.

Had V6 launched five years earlier V4 would probably be dead.

V6 usage will probably keep creeping up, but as it stands we will likely be dual stack forever. Once the installed user base and sunk cost is this high the design is fixed and can never be changed without a hard core heavy handed measure like a government mandate.

[−] bsder 25d ago

> Had V6 launched five years earlier V4 would probably be dead.

Not a chance. IPv6 ate way more memory than IPv4, and memory was expensive back in 1995. Even IPv4 proliferation was chewing up routing-table memory, and that was why the IETF introduced Classless Inter-Domain Routing (CIDR) in 1993, which gave us variable-length prefixes and route aggregation.

Memory cost was a problem in routing tables until after both the DotBomb and the TeleBomb.

[−] pocksuppet 25d ago
They weren't all that wrong. NAT was an incompatible protocol upgrade - that's why it broke protocols that made pre-NAT assumptions, like FTP - but it kept most of them working. DNS64 is also an incompatible protocol upgrade that breaks protocols that make pre-DNS64 assumptions, like hardcoding addresses - but it keeps most of them working.

In DNS64, whenever your DNS resolver encounters an IPv4-only site, it translates it to an IPv6 address under a translator prefix, and returns that address to the client. The client connects to the translator server via that address, and the translator server opens an IPv4 connection to the website. Your side of the network is IPv6-only, not even running tunneled v4.

This only breaks things to about the same small extent that the introduction of NAT did.
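
For concreteness, the synthesis step being described looks roughly like this (a sketch assuming the RFC 6052 well-known prefix 64:ff9b::/96; real deployments can use a network-specific prefix instead):

  import ipaddress

  def synthesize_nat64(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
      # DNS64 hands the client a v6 address with the v4 address embedded in the
      # low 32 bits of the translator prefix; the NAT64 box reverses the mapping.
      net = ipaddress.ip_network(prefix)
      return ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(v4)))

  print(synthesize_nat64("192.0.2.33"))  # 64:ff9b::c000:221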

[−] iknowstuff 25d ago
iOS benefited from a heavy-handed mandate: it and all of its apps have to work on IPv6-only networks. They just need to expose the IPv4 internet as IPv6 addresses.
[−] Dagger2 25d ago
I said "whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6", and that's what you did here. And as predicted, it was 6to4 you reinvented.

v4 extension headers are well known to get your packets dropped on the Internet, so they're a non-starter, but there's another extension mechanism you can use: you can set the "next protocol" field to a special value, then put the extended address at the start of the payload, followed by the original payload. This is functionally identical to using extension headers, but using a mechanism that doesn't get your packets dropped.

Far from not being seriously considered, this approach was adopted in v6 as RFC 3056.
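
Concretely, 6to4 just embeds the public v4 address into the v6 prefix, and the packets then ride over v4 as IP protocol 41. A minimal sketch of the prefix derivation:

  import ipaddress

  def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
      # RFC 3056: the site's /48 is 2002:VVVV:VVVV::/48, where VVVV:VVVV is
      # the 32-bit public IPv4 address of the 6to4 router.
      v4_int = int(ipaddress.IPv4Address(v4))
      return ipaddress.IPv6Network(((0x2002 << 112) | (v4_int << 80), 48))

  print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48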

> Except after the upgrade, there'd be no parallel system.

No. You get a parallel system because v6 addresses are too big to work with v4. Even if you used extension headers, v6 addresses would still be too big to work with v4. Whatever you do, v6 addresses are too big to work with v4. You WILL get a parallel system, and there's no way around this other than not making the addresses bigger.

[−] perennialmind 25d ago
The hopes were for a converged software stack, but the candidates were all parallel protocols competing with IPv4. A full transition would end with the extinction of IPv4. Upgrading IPv4, quite apart from the brass tacks of the wire format, would have entailed variable-length addresses, and even the idea of starting a new protocol with 64-bit addresses and an upgrade path was considered far too scary at the time. That was only one of a slew of non-technical requirements imposed from above for future-proofing, NIH paranoia, vague security promises and politics in general.

A decade later, when IPv6 had real-world deployments, was far too late for 6to4 to save the day: entirely because a swath of non-6to4 addresses existed and needed to be reachable. With no apparent strategic gain in upgrading the commercial core, it would absolutely have made sense to align financial interests by upgrading out past the edge instead. Unfortunately the hard parts the engineers anticipated in the early 90s were not the ones that held IPv6 back.

In summary, I agree: 6to4 could have been great!

[−] Dagger2 24d ago
Yes, of course they were all parallel protocols -- because your problem here is that v4 doesn't _have_ variable-length addresses. It's trivial to imagine a version of v4 that does, but that version would also be a parallel protocol to the version of v4 we actually have.

> even the idea of starting a new protocol with 64-bit addresses with an upgrade path was considered far too scary at the time

No it wasn't? Every proposal had an upgrade path. Having one was a mandatory requirement.

You can read the requirements document yourself: https://datatracker.ietf.org/doc/html/rfc1726. To me, it looks like these requirements were decided by the community rather than being imposed from above, but either way you can see that having a simple transition from v4 is listed right there.

> A decade later, when IPv6 had real-world deployments was far to late for 6to4 to save the day: entirely because a swath of non-6to4 addresses existed and needed to be reachable

What I'm hearing is that the compatibility with v4 that 6to4 provides wasn't considered important, and not by people in any position of authority but rather by the actual people choosing what to deploy on their own networks. Even though there were more 6to4 hosts than non-6to4 ones, and even though 6to4 doesn't prevent you from reaching those non-6to4 hosts, people still didn't want it.

[−] fortran77 25d ago

> Fixing the human readable address encoding would help

Yes! They need an alternate encoding form that distills to the same addresses.

My machine's link-local IPv6 address is "fe90::6329:c59:ad67:4b52%8"

If I try to paste that into the address bar in Edge or Chrome (with the https://) it does an internet search on that string! No way around it.

I have to do workarounds like: "http://fe90--6329-c59-ad67-4b52s8.ipv6-literal.net:8081/"

All to test the IPv6 interface on a web server I'm running on my local machine.

[−] Dagger2 25d ago
Blame the WHATWG for that. They're the reason that v6 addresses in URLs are such a mess. http://[fe90::6329:c59:ad67:4b52%8]:8081/ should work, but doesn't because they refuse to allow a % there. (This is really damned frustrating, because link-locals are excellent for setting up routers or embedded machines, or for recovering from network misconfigurations.)

If it's on the same machine then just use http://[::1]:8081/. Dropping the interface specifier (http://[fe90::6329:c59:ad67:4b52]:8081/) works if the OS picks a default, which some will. curl seems happy to handle it. Or just use one of the non-link-local addresses on the machine, if you have any.

The other frustrating part of this is that it makes it impossible to come up with your own address syntax. An NSS plugin on Linux could implement a custom address format, and it's kind of obvious that the intention behind the URL syntax is that "[" and "]" enter and exit a raw address mode where other URI metacharacters have no special meaning. In general you can't syntax validate the address anyway because you don't know what formats it could be in (including future formats or ones local to a specific machine), so the only sane thing to do is pass the contents verbatim to getaddrinfo() and see if you get an error.

But no, they wrote the spec to only allow a subset of v6 addresses and nothing else.

I very much didn't test it, but this patch might do the job on Firefox (provided there's no code in the UI doing extra validation on top):

  --- a/netwerk/base/nsURLHelper.cpp
  +++ b/netwerk/base/nsURLHelper.cpp
  @@ -928,3 +928,3 @@ bool net_IsValidIPv4Addr(const nsACString& aAddr) {
   bool net_IsValidIPv6Addr(const nsACString& aAddr) {
  -  return mozilla::net::rust_net_is_valid_ipv6_addr(&aAddr);
  +  return true;
   }
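
For what it's worth, the "pass it verbatim to getaddrinfo() and see" approach described above is roughly this (a sketch; whether a scoped literal resolves depends on the OS and on the interface existing):

  import socket

  def try_resolve(host: str, port: int):
      # No syntax validation at all: the string goes to getaddrinfo() untouched,
      # and a gaierror is the only "this isn't an address or a resolvable name"
      # signal we get back.
      try:
          return socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
      except socket.gaierror as e:
          return f"not resolvable here: {e}"

  print(try_resolve("fe80::6329:c59:ad67:4b52%eth0", 8081))  # scoped literal; OS-dependent
  print(try_resolve("::1", 8081))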
[−] pocksuppet 25d ago
An IPv6 literal hostname in a URL must be surrounded by square brackets.
[−] fortran77 25d ago
Chrome and Edge still do a search on it in my default search engine even with []

https://[fe80::5ad6:9567:26b7:763b%18]:8081/

Even Hacker News doesn't think it's a link

[−] 1718627440 25d ago
On Mozilla Firefox, after re-enabling the separation into URL bar and search bar, it reports: "Invalid URL – Hmm. That address doesn’t look right. \n Please check that the URL is correct and try again." What does the '%' mean in there?
[−] pocksuppet 24d ago
For link-local addresses, the part after % identifies the link. It's platform-specific - in Linux it's the interface name and in Windows it's an ID number.
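
At the socket layer the zone just becomes an interface index. A small sketch (assuming a Linux box with an interface named eth0 and something listening on that address):

  import socket

  idx = socket.if_nametoindex("eth0")   # Linux: name -> index; Windows uses the index directly ("%18")

  # The index lands in the scope_id field of the IPv6 sockaddr, i.e. the 4th
  # element of Python's (host, port, flowinfo, scope_id) tuple.
  with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
      s.connect(("fe80::5ad6:9567:26b7:763b", 8081, 0, idx))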
[−] pocksuppet 25d ago
You would have ended up with a protocol identical to IPv6, but with fewer address bits.

If you add *any* address bits you've already broken protocol compatibility and you need to upgrade the entire world. While you're already upgrading the entire world, you should add so many address bits that we'll never need more, because it costs the same, and you may as well fix those other niggling problems as well, right?

[−] vbezhenar 25d ago
IPv4 is absolutely fine. Consumers can be behind NAT. That's fine. Servers can be behind reverse proxies, routing by DNS hostname. That's also fine. IPv4 address might be a valuable resource, shared between multiple users. Nothing wrong with it.

Yes, it denies simple P2P connectivity. World doesn't need it. Consumers are behind firewalls either way. We need a way for consumers to connect to a server. That's all.

[−] Lt_Riza_Hawkeye 25d ago
You're the reason I have to call my ISP to host a minecraft server for a couple of my friends.
[−] mort96 25d ago
No, they're not. That's down to some other weird policy specific to your ISP.

With IPv4 + NAT, you have a public IP address. That public address goes to your router. Your router can forward any port to any machine on your LAN. I used to run Minecraft servers from a residential connection on IPv4, it was fine. Never had to call the ISP.

[−] Symbiote 25d ago
This assumes the ISP allocates a public IPv4 address.

In many countries they don't have enough, so you have CGNAT.

[−] mort96 25d ago
That's a fair point. In my mind, residential ISPs give out public IP addresses and CGNAT is just for cell phones. But I recognize that the philosophy of "we don't need to solve IP address exhaustion, we just need to keep people able to access Facebook" leads to CGNAT or multi-level NAT.

Still, I do think that the solution of, "one IPv4 address per household + NAT" is a perfectly good system. I view the IPv6 mentality of giving each computer in the world a globally unique IPv6 address as a non-goal.

[−] wyufro 25d ago
Even if you go with one IPv4 per household + 1 per company, you're going to be hard-pressed to find room for that in 32 bits, at least after you add the routing infrastructure.
[−] pocksuppet 25d ago
There are more households than IP addresses. They can't all have one each. So you need longer addresses, and then you're already reinventing IPv6.
[−] mburns 25d ago
There are roughly twice as many IPv4 addresses as households globally.
[−] mort96 25d ago
That's not enough.

For one, businesses and other entities also need Internet access. Cloud companies in particular need a ton of addresses. That's gonna eat up a fair chunk of the remaining 50%.

Two, humanity is still growing, governments across the world are building new housing. That's gonna eat up another chunk.

Three, routing is hierarchical, and infrastructure organisations and ISPs are assigned blocks of addresses, not individual addresses. We can't just have a pool of free IP addresses and assign any address to any house in the world as needed. So even having 50% of IP addresses free wouldn't really be enough.

So in my mind, an IP-address-to-household ratio of 0.5 means residential CGNAT is inevitable, even if we ignore legacy issues like individual universities and other institutions owning gigantic /8 or /16 ranges.
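
Back-of-the-envelope version (the household count below is an assumption, commonly estimated in the low billions, not a census figure):

  total_v4      = 2**32                      # 4,294,967,296
  reserved      = (2**28 * 2                 # 224/4 multicast + 240/4 reserved
                   + 2**24                   # 127/8 loopback
                   + 2**24 + 2**20 + 2**16)  # 10/8, 172.16/12, 192.168/16 private
  usable_public = total_v4 - reserved        # ~3.72 billion
  households    = 2_300_000_000              # assumed global household count

  print(usable_public / households)          # ~1.6 addresses per household, before any
                                             # allowance for businesses, cloud,
                                             # infrastructure, or allocation slack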

[−] tremon 25d ago
Regardless of the actual number, I'm pretty sure that IPv4 addresses are not proportionally assigned to each region according to # of households.
[−] happymellon 25d ago

> That's a fair point. In my mind, residential ISPs give out public IP addresses and CGNAT is just for cell phones.

If you are giving out public IPs then you aren't really NAT'ing.

[−] mort96 25d ago
Hm? The ISP gives one IP address to a router in a house, that router uses NAT to let all the computers inside that house use the Internet through the one single shared public IP address. That's NAT, isn't it?
[−] IcePic 25d ago
Well, in a strict sense, it is "you" who chooses to run a NATing router there; you could just have one single computer per ISP connection. Or have it run a proxy for you, or NAT.

I mean, I understand that this feels normal today, that 10, 20 or 50 devices need internet and that the way to manage that is to NAT the connections, but your ISP isn't doing NAT; you are.

[−] voxic11 25d ago
Nope, CGNAT means I need to call my ISP. We now have 2 levels of NAT because the IPv4 address situation has gotten so bad they can't even give every residence its own public IP. If your ISP hasn't adopted it yet, it's likely they got lucky and bought a ton of IPv4 addresses a long time ago when they were cheap, and have decided using them is cheaper than upgrading their network to support CGNAT.
[−] Spooky23 25d ago
Nope. If you get assigned a routable IPv4 IP, you just have a shit ISP. I led the rollout of one of the larger O365 implementations. Outlook and the Office stack needed like 10-16 ports per user. We served like 150k people with 30 outbound IPs. If you have an IP, you have 64k+ ports to use.
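
The arithmetic behind that, using the figures from the comment (a sketch, not independently verified numbers):

  users          = 150_000
  ports_per_user = 16                                  # upper end of "10-16 ports per user"
  public_ips     = 30
  ports_per_ip   = 64_000                              # roughly the usable port range per address

  peak_if_everyone_is_on = users * ports_per_user      # 2,400,000
  pool                   = public_ips * ports_per_ip   # 1,920,000

  # The pool works because nowhere near all 150k users hold their peak port
  # count at the same moment.
  print(peak_if_everyone_is_on, pool)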

I also deployed it as a pilot on an internal network. Other than getting direct IPv6 connectivity to some services, which sometimes gave us better performance, it conferred no advantage to us.

IPv6 is great for phones where you don't expect any inbound traffic. Even then, every US carrier is using Carrier NAT to route and proxy traffic for their own purposes.

[−] unethical_ban 25d ago
I'm glad I have a shit ISP, then. So shitty being able to host my own software.
[−] Spooky23 25d ago
The “don’t” was missing. Honestly, I give up with Siri dictation. Either my voice has changed or it’s changed in a way that it doesn’t like my cadence or diction.

Either way, mea culpa.

[−] unethical_ban 24d ago
Ah, no problem. It ended up with me doing a half day deep dive on IPv6 internals.
[−] estimator7292 25d ago
Yeah, if you ignore literally every use of the internet except "check Facebook" then it's perfect.

Unfortunately, the internet is used for a lot more than using one of the six gigantic centralized websites.

[−] RIMR 25d ago

> Yes, it denies simple P2P connectivity. World doesn't need it.

Worth pointing out that this article was written by the now-CEO of Tailscale. I don't know if "The world doesn't need P2P connectivity" is a compelling take.

[−] Fnoord 25d ago
IPv4 usage in its current state would've been much more limited and annoying in a world without IPv6. Therefore, IPv4 exists as-is thanks to others adopting IPv6.
[−] throw0101a 25d ago

> IPv4 is absolutely fine. Consumers can be behind NAT.

I don't want our communications infrastructures to be just for consumers.

[−] jrm4 25d ago
This comment exemplifies my worst fear and reinforces my somewhat incomplete idea that IPv4 is perhaps overall safer for the world, and that "worse is better" depending on what you're optimizing for.

Roughly, it's my belief that an IPv6 world makes it easier for centralizing forces and harder for local p2p or p2p-esque ones; e.g. an IPv6 world would have likely made it easier to do bad things like "charge for individual internet user in a home."

The decentralization of "routing power" is more a good thing than bad; what you pay for in complexity you get back in "power to the people."

[−] kalleboo 25d ago
A lot of us don't like this "you will own nothing and you will be happy" kind of energy.
[−] notepad0x90 25d ago
You know that's not what he meant. The world is always changing. It was designed in 1998 by networking gear companies, with their own company needs in mind. It wasn't engineered with end users, or even network administrators and app developers, in mind.

The only reason it's around is the sunk cost fallacy and people stuck in decades-old tech debt. A new protocol designed today would be different, much the same as Rust is different from Ada. SD-WAN wasn't a thing in 1998, and the cost of chips and the demands of mobile customers were nothing like today's. Supply/demand economics have changed the very requirements behind the protocol.

Even concepts like source and destination addressing should be rethought. The very concept of a network-layer protocol that doesn't incorporate 0-RTT encryption by default is ridiculous in 2026. Even protocols like ND, ARP, RA, DHCP and many more are insecure by default. Why is my device just trusting random claims that a neighbor has a specific address, without authentication? Why is it connecting to a network (any network! wired or wireless, why does it matter, this is a network-layer concern) without authenticating the network's security and identity authority? I despise the corporatized term "zero trust" but this is what it means, more or less.

People don't talk about security, trust, identity and more, because IPv6 was designed to save networking gear vendors money, and any new costly features had better come with revenue streams like SD-WAN hosting by those same companies. There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing.

It all comes down to how much money it costs the participants of the RFC committees. Given how dependent the world is on this tech, I'm hoping governments intervene. It's sad that this is the tech we're passing to future generations. We'll be setting up colonies on Mars and troubleshooting addressing and security issues like it's 2005.

[−] m463 25d ago
You're implying that they could not have done better.

I think they "shipped it" and washed their hands of it.

But I think there should have been more iterations, until we got a little more ipv4+ and less ipv6.

[−] mort96 25d ago
I don't like this post's negativity towards ARP. ARP is the reason we can have IP networking on a LAN without a router. The default gateway just becomes a special case of general IP networking on a LAN.
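
A sketch of that "the gateway is just a special case" point (simplified: one local subnet, no extra on-link routes):

  import ipaddress

  def arp_target(dst_ip: str, local_net: str, gateway_ip: str) -> str:
      # On-link destinations get ARPed for directly; everything else is sent to
      # the default gateway's MAC, making the gateway just another on-link
      # neighbor as far as ARP is concerned.
      dst = ipaddress.ip_address(dst_ip)
      return dst_ip if dst in ipaddress.ip_network(local_net) else gateway_ip

  print(arp_target("192.168.1.42", "192.168.1.0/24", "192.168.1.1"))   # 192.168.1.42
  print(arp_target("93.184.216.34", "192.168.1.0/24", "192.168.1.1"))  # 192.168.1.1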

Otherwise, the networking history part of this post is amazing. I haven't gotten to the IPv6 part yet.

[−] themanualstates 26d ago
What is this article even on about? The stuff on my network assigns itself ipv6 addresses based on its mac address? That's how you can do stateless ipv6?

Regardless, ipv6 was to have more IP addresses because of ipv4 exhaustion and NAT?

My Xbox tells me my network sucks because it doesn't have ipv6, but this is a very North-American perspective regardless.

[−] p4bl0 26d ago
Thanks for sharing this very interesting read.

There's one point I don't really get and I would be glad if someone could clarify it for me: the author says that, even over wifi, the CSMA/CD protocol is not used anymore. Then how does it actually work?

Discussing this, the author explains:

> If you have two wifi stations connected to the same access point, they don't talk to each other directly, even when they can hear each other just fine.

So, each station still has to decide at some point whether what it's hearing is for it or not, as it could be another station talking to the AP, or the AP talking to another station. How is that done if not using CSMA/CD (or something very similar at least)?

[−] ianburrell 25d ago
I have come to think that having both SLAAC and DHCPv6 was a big flaw in IPv6. SLAAC is awesome, but having two config mechanisms is confusing. It doesn't help that Android refuses to support DHCPv6.

I think SLAAC came from a world where computers were expensive, DHCP servers were separate, and they wanted to eliminate them. But we are in a world where computers are cheap and every router can run DHCP.

We could have had easy config with DHCPv6 giving out MAC based addresses by default. The auto config would still work on link-local.
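
For reference, the classic MAC-derived SLAAC identifier works like this (a sketch of RFC 4291's modified EUI-64; modern stacks usually prefer RFC 7217 stable-privacy or temporary addresses instead):

  def eui64_interface_id(mac: str) -> str:
      # Split the MAC, insert ff:fe in the middle, flip the universal/local bit.
      b = bytearray(int(x, 16) for x in mac.split(":"))
      b[0] ^= 0x02
      eui = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
      return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

  # Combined with an advertised /64 prefix this yields the SLAAC address:
  print("2001:db8:1:2:" + eui64_interface_id("52:54:00:12:34:56"))
  # -> 2001:db8:1:2:5054:ff:fe12:3456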

[−] globular-toast 26d ago
This is one of my favourite blog posts ever. For those unaware (or who didn't read right to the bottom), the author is the CEO of Tailscale.

One of the problems we have is when we're born we don't question anything. It just is the way it is. This, of course, lets us do things in the world much more quickly than if we had to learn everything from basic principles, but it's a disadvantage too. It means we get stuck in these local optima and can't get out. Each successive generation only finally learns enough to change anything fundamental once they're already too old and set in their ways doing the standard thing.

How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.

[−] AlienRobot 25d ago
Very interesting post. I never considered the fact that IPv6 was going to be more than just a bigger IPv4.

Also funny that it was designed in the early 1990s and only recently reached 50% adoption.

[−] xyzelement 25d ago
I remember when ipv6 seemed like an inevitable next step. The fact that it fizzled seems like the problem it was trying to solve just doesn't matter? We somehow found enough ipv4 addresses to make the whole thing keep working just fine (from practical end user perspective) which seems like we never truly needed ipv6? Is that the wrong conclusion?
[−] PunchyHamster 26d ago

> Now imagine that X changes addresses to Q. It still sends out packets tagged with (uuid,80), to IP address Y, but now those packets come from address Q. On machine Y, it receives the packet and matches it to the socket associated with (uuid), notes that the packets for that socket are now coming from address Q, and updates its cache. Its return packets can now be sent, tagged as (uuid), back to Q instead of X. Everything works! (Modulo some care to prevent connection hijacking by impostors.2)

And how the fuck does anything in-between know where to route it? The article glows like a blazing beacon of ignorance about everything in-between.

The whole problem with mobile IP is "how do we get intermediate devices to know where to go?" We're back to:

> The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical.

Which the author hinted at, then forgot. We can't have a globally routable, unique, random-esque ID precisely because an address has to be hierarchical. Keeping the connection flow ID at L4 instead of L3+L4 changes very little; yeah, you can technically roam the client, except how the fuck would the server know where to send the packet back when the L3 address changes? It would have to get a client packet with the updated L3 address, and until then all packets would go into the void.

But hey, at least it's some progress? NOPE: nothing at the protocol layer can be trusted before authentication, it would make DoS attacks far easier (just flood the host with a bunch of random uuids), and you would still end up doing it the QUIC way of re-implementing all of that stuff inside the encryption.
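
For reference, the mechanism the quote is gesturing at, with the parent's caveat made explicit, is roughly QUIC-style connection migration (a toy sketch; it doesn't fix routing, the server only learns the new address when a packet arrives from it, and only trusts it after a path-validation round trip):

  # conn_id -> {"peer": (ip, port), "validated_paths": set of (ip, port)}
  connections = {}

  def on_datagram(conn_id: bytes, src_addr: tuple, payload: bytes):
      conn = connections.get(conn_id)
      if conn is None:
          return                                         # unknown ID: drop, allocate no state
      if src_addr != conn["peer"]:
          if src_addr in conn["validated_paths"]:
              conn["peer"] = src_addr                    # peer moved; replies go to the new address
          else:
              start_path_validation(conn_id, src_addr)   # challenge before trusting the new path
              return
      handle_payload(conn, payload)

  def start_path_validation(conn_id, addr):
      pass                                               # placeholder: challenge/response round trip

  def handle_payload(conn, payload):
      pass                                               # placeholder: normal per-connection processing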

[−] Sniffnoy 26d ago
(2017)
[−] amelius 25d ago
Are there any parts in the design of v6 that provide opportunities for companies to further enshittify our experience of the internet?
[−] nyrikki 25d ago
Everyone forgets that the Internet Architecture Board took a religious view on "Internet transparency and the end-to-end principle", which was counter to the realities of limited tooling and actual site maintainers' needs. [0]

There were many of us who, even when it was still IPng (IP Next Generation) in the mid-1990s, tried to get it working and spent a significant amount of effort to do so, only to be hit with unrealistic ideological ideals that blocked our ability to deploy it, especially with the limitations of the security tools back in the day.

Remember, when IPng started, even large regional ISPs like xmission had finger servers, many people used telnet, and Slackware actually enabled telnet with no root password by default!!! I used both to wall a coworker who was late to work because he was playing tw2000.

Back then we had really bad application firewalls like Altavista and PIX was just being invented, and the large surveillance capitalism market simply didn't exist then.

The IAB hampered deployment by choosing hills to die on without providing real alternatives, and didn't relent until IPv4 exhaustion became a problem, by which point they had lost their battle: everyone was forced into CGNAT etc. because of the IETF, not in spite of it.

The IAB and IETF were living in an MIT ITS mindset when the real world was making that model hazardous and impossible. End-to-end transparency may be 'pretty' to some people, but it wasn't what customers needed. When they wrote the RFCs to make other services simply fail and time out if you enabled IPv6 locally but didn't have ISP support, they burned a lot of goodwill, and everyone just started ripping out the IPv6 stack and running IPv4 only.

IMHO, like almost all tech failures, it didn't fail on technical merits; it failed on ignorance of the users' needs and a refusal to consider them, insisting that adopters just had to drink their particular flavor of Kool-Aid or stick to IPv4, and until forced, most people chose the latter.

[0] https://www.rfc-editor.org/rfc/rfc5902.txt

[−] xacky 25d ago
[flagged]
[−] allexpensespaid 25d ago
I just read something about IPv8! Can anyone confirm this is real?
[−] tschaeferm 26d ago
Why repeat old news?
[−] NooneAtAll3 26d ago

> Internet routing can't handle mobility - at all.

So all the fairy tales about IP being invented for nuclear war were a lie? The moment the military started moving around, IP became useless?